Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei
2016-01-01
We propose the architecture of a novel robot system that merges biological and artificial intelligence, based on a neural controller connected to an external agent. We first built a framework connecting a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and artificial components via simple binomial coding/decoding schemes. In this paper, we utilize a specific hierarchical dissociated neural network for the first time as the neural controller. In our work, neural cultures were successfully employed to control an artificial agent with high performance. Notably, under tetanic stimulus training, the robot performed progressively better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a form of reinforcement learning). Compared with previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may offer solutions for the learning abilities of intelligent robots through the engineering application of neural-network plasticity, as well as theoretical inspiration for next-generation neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074
Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.
Dai, Chenyun; Zheng, Yang; Hu, Xiaogang
2018-01-01
Robotic assistant-based therapy holds great promise for improving the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating the finger extensor muscle forces of a stroke survivor based on the descending neural drive decoded from population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during stroke rehabilitation.
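The core idea, pooling decomposed motoneuron discharge events into a smoothed, normalized composite firing rate, can be sketched as follows. This is a rough illustration rather than the authors' pipeline: the spike trains, sampling rate, and smoothing window below are made up, and the upstream sEMG decomposition step is assumed to have already produced the discharge times.

```python
import numpy as np

def neural_drive(discharge_times, n_units, duration, fs=2048, win=0.4):
    """Smoothed composite firing rate of a decomposed motoneuron pool.

    `discharge_times`: one array of spike times (s) per motoneuron, as
    produced by an upstream sEMG decomposition step (assumed here).
    """
    spikes = np.zeros(int(duration * fs))
    for unit in discharge_times:                 # pool the discharge events
        idx = (np.asarray(unit) * fs).astype(int)
        spikes[idx[idx < len(spikes)]] += 1
    kernel = np.hanning(int(win * fs))
    kernel /= kernel.sum()
    rate = np.convolve(spikes, kernel, mode="same") * fs  # pooled spikes/s
    return rate / n_units                        # normalize by pool size

# toy pool: two extra units recruited in the second half -> the drive rises
trains = [np.arange(0.0, 2.0, 0.10),
          np.arange(1.0, 2.0, 0.05),
          np.arange(1.2, 2.0, 0.04)]
drive = neural_drive(trains, n_units=3, duration=2.0)
```

A force estimate would then be obtained by regressing measured force onto this drive signal, which is the comparison the report makes against sEMG amplitude.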
Application of neural models as controllers in mobile robot velocity control loop
NASA Astrophysics Data System (ADS)
Cerkala, Jakub; Jadlovska, Anna
2017-01-01
This paper presents the application of inverse neural models as controllers, in comparison with classical PI controllers, for the velocity tracking task in a two-wheeled, differentially driven mobile robot. The PI controller synthesis is based on a linear approximation of the actuators with an equivalent load. In order to obtain relevant datasets for training the feed-forward multi-layer perceptron used as the neural model, a mathematical model of the mobile robot is used that combines its kinematic and dynamic properties, such as chassis dimensions, center-of-gravity offset, friction, and actuator parameters. The neural models are trained off-line to act as the inverse dynamics of the DC motors with their particular load, using data collected in a simulation experiment of motor input-voltage step changes within a bounded operating area. The performance of the PI controllers versus the inverse neural models in the mobile robot's internal velocity control loops is demonstrated and compared in a simulation experiment of a navigation control task for line-segment motion in the plane.
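For reference, the classical PI velocity loop that the neural models are compared against can be sketched on a first-order discrete actuator-with-load approximation. The plant constants and PI gains below are illustrative stand-ins, not values from the paper.

```python
# First-order actuator-with-load approximation: w[k+1] = a*w[k] + b*u[k]
a, b = 0.95, 0.1          # illustrative plant constants
kp, ki = 2.0, 5.0         # PI gains, hand-tuned for this toy plant
dt = 0.01                 # control period (s)

def track(w_ref, steps=300):
    """Drive the wheel speed w toward w_ref with a discrete PI controller."""
    w, integ = 0.0, 0.0
    for _ in range(steps):
        e = w_ref - w
        integ += e * dt
        u = kp * e + ki * integ      # PI control law
        w = a * w + b * u            # plant step
    return w

final = track(1.0)
```

An inverse neural model replaces the PI law with a network trained offline to output the voltage u that produces a requested speed, i.e. to approximate the plant's inverse dynamics.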
Robustness of a distributed neural network controller for locomotion in a hexapod robot
NASA Technical Reports Server (NTRS)
Chiel, Hillel J.; Beer, Randall D.; Quinn, Roger D.; Espenschied, Kenneth S.
1992-01-01
A distributed neural-network controller for locomotion, based on insect neurobiology, has been used to control a hexapod robot. How robust is this controller? Disabling any single sensor, effector, or central component did not prevent the robot from walking. Furthermore, statically stable gaits could be established using either sensor input or central connections. Thus, a complex interplay between central neural elements and sensor inputs is responsible for the robustness of the controller and its ability to generate a continuous range of gaits. These results suggest that biologically inspired neural-network controllers may be a robust method for robotic control.
Vazquez, Luis A; Jurado, Francisco; Castaneda, Carlos E; Santibanez, Victor
2018-02-01
This paper presents a continuous-time decentralized neural control scheme for trajectory tracking of a two-degrees-of-freedom, direct-drive vertical robotic arm. A decentralized recurrent high-order neural network (RHONN) structure is proposed to identify the dynamics of the plant online, in a series-parallel configuration, using the filtered-error learning law. Based on the RHONN subsystems, a local neural controller is derived via the backstepping approach. The effectiveness of the decentralized neural controller is validated on a robotic arm platform of our own design, with unknown parameters, which uses industrial servomotors to drive the joints.
Feasibility study of robotic neural controllers
NASA Technical Reports Server (NTRS)
Magana, Mario E.
1990-01-01
The results are given of a feasibility study performed to establish whether an artificial neural controller could be used to achieve joint-space trajectory tracking of a two-link robot manipulator. The study is based on the results obtained by Hecht-Nielsen, who claims that a functional map can be implemented to a desired degree of accuracy with a three-layer feedforward artificial neural network. Central to this study is the assumption that the robot model, as well as its parameter values, is known.
Kim, Yoon Jae; Park, Sung Woo; Yeom, Hong Gi; Bang, Moon Suk; Kim, June Sic; Chung, Chun Kee; Kim, Sungwan
2015-08-20
A brain-machine interface (BMI) should be able to help people with disabilities by replacing their lost motor functions. To replace lost functions, robot arms have been developed that are controlled by invasive neural signals. Although invasive neural signals have a high spatial resolution, non-invasive neural signals are valuable because they provide an interface without surgery. Thus, various researchers have developed robot arms driven by non-invasive neural signals. However, robot arm control based on the imagined trajectory of a human hand can be more intuitive for patients. In this study, therefore, an integrated robot arm-gripper system (IRAGS) that is driven by three-dimensional (3D) hand trajectories predicted from non-invasive neural signals was developed and verified. The IRAGS was developed by integrating a six-degree of freedom robot arm and adaptive robot gripper. The system was used to perform reaching and grasping motions for verification. The non-invasive neural signals, magnetoencephalography (MEG) and electroencephalography (EEG), were obtained to control the system. The 3D trajectories were predicted by multiple linear regressions. A target sphere was placed at the terminal point of the real trajectories, and the system was commanded to grasp the target at the terminal point of the predicted trajectories. The average correlation coefficient between the predicted and real trajectories in the MEG case was [Formula: see text] ([Formula: see text]). In the EEG case, it was [Formula: see text] ([Formula: see text]). The success rates in grasping the target plastic sphere were 18.75 and 7.50 % with MEG and EEG, respectively. The success rates of touching the target were 52.50 and 58.75 % respectively. A robot arm driven by 3D trajectories predicted from non-invasive neural signals was implemented, and reaching and grasping motions were performed. 
In most cases, the robot closely approached the target, but the success rate was not very high because non-invasive neural signals are less accurate. However, the success rate could be sufficiently improved for practical applications by using additional sensors. Robot arm control based on hand trajectories predicted from EEG would allow for portability, and the performance with EEG was comparable to that with MEG.
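The decoding step named here, multiple linear regression from neural features to a 3-D hand trajectory, can be sketched with synthetic data standing in for the MEG/EEG features. The data shapes and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_ch = 400, 16
X = rng.standard_normal((T, n_ch))                 # stand-in neural features
W_true = rng.standard_normal((n_ch, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((T, 3))  # 3-D hand trajectory

# multiple linear regression decoder: least-squares fit with an intercept
Xb = np.hstack([X, np.ones((T, 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
Y_hat = Xb @ W

# per-axis correlation between predicted and real trajectories,
# the same quality metric the study reports
r = [np.corrcoef(Y[:, k], Y_hat[:, k])[0, 1] for k in range(3)]
```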
A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)
Almusawi, Ahmed R. J.; Dülger, L. Canan; Kapucu, Sadettin
2016-01-01
This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the ANN-based kinematics. The novelty of the proposed ANN is the inclusion, in the network's input pattern, of the current joint-angle configuration of the robotic arm as feedback, in addition to the desired position and orientation, whereas a traditional ANN has only the desired position and orientation of the end effector in its input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. Comprehensive experimental results demonstrate the applicability and efficiency of the proposed approach in robotic motion control. Including the current joint-angle configuration in the ANN input significantly increased the accuracy of the estimated joint angles. The new controller design has advantages over existing techniques in minimizing the position error in unconventional tasks and increasing the accuracy of the joint-angle estimates. PMID:27610129
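The proposed input pattern can be sketched as follows. The layer sizes and random weights are placeholders for the trained network; only the idea of the input layout (desired pose plus fed-back joint angles in, joint angles out) follows the abstract, and the 7-element pose encoding is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, sizes):
    """Random-weight forward pass; stands in for the trained ANN."""
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.standard_normal((n_in, n_out)) * 0.1
        x = np.tanh(x @ W)
    return x

desired_pose = np.zeros(7)      # position (3) + orientation quaternion (4)
current_joints = np.zeros(6)    # feedback: the arm's present configuration
x = np.concatenate([desired_pose, current_joints])  # 13-d input pattern
theta = mlp(x, [13, 32, 32, 6])                     # 6 joint angles out
```

A traditional IK network would use only the 7-element `desired_pose` as input; feeding back `current_joints` lets the network disambiguate among the multiple joint configurations that reach the same pose.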
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, such as insects, can effectively perform complex behaviors with little neural computation. For example, they can walk around their environment, escape from corners and deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with unknown situations. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities result from the integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that these ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many-degrees-of-freedom (DOF) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle to avoid obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate complex environments. We first tested our approach in a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII, with 19 DOFs, to adaptively avoid obstacles and navigate in the real world. PMID:26528176
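The hysteresis effect exploited here can be reduced to its smallest form: a single neuron with an excitatory self-connection (the paper's network uses two fully connected neurons; this one-neuron reduction is for illustration only). Sweeping the input up and then back down leaves the output on different attractor branches at the same input value, which is the short-term memory used for turning.

```python
import numpy as np

w = 2.0                   # self-connection > 1 makes the neuron bistable

def settle(inp, o, steps=200):
    """Iterate the discrete-time neuron o <- tanh(w*o + inp) to a fixed point."""
    for _ in range(steps):
        o = np.tanh(w * o + inp)
    return o

inputs = np.linspace(-1.0, 1.0, 21)
o, up, down = -1.0, [], []
for i in inputs:          # sweep the input upward ...
    o = settle(i, o)
    up.append(o)
for i in inputs[::-1]:    # ... then back down
    o = settle(i, o)
    down.append(o)

# at input = 0.0 the two sweeps sit on different attractors (hysteresis)
gap = down[10] - up[10]
```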
Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network
2015-01-01
For specific purposes, a vision-based surveillance robot that runs autonomously and acquires images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that avoids obstacles using three ultrasonic distance sensors and a backpropagation neural network, with a camera for face recognition. A 2.4 GHz transmitter streams video to the operator/user, who directs the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system. PMID:26089863
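A minimal sketch of such a sensor-to-action mapping, assuming a hand-made labeling rule in place of real training data: three normalized ultrasonic distances in, a steer left/straight/right decision out, with a one-hidden-layer network trained by plain backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy labels: go straight if the front is clear, else turn to the freer side
def label(d):
    left, front, right = d
    if front > 0.6:
        return 1                        # straight
    return 2 if left < right else 0     # 0 = turn left, 2 = turn right

X = rng.uniform(0.0, 1.0, (2000, 3))    # normalized ultrasonic distances
Y = np.eye(3)[[label(d) for d in X]]    # one-hot action targets

# one-hidden-layer network trained with backpropagation
W1 = rng.standard_normal((3, 16)) * 0.5
W2 = rng.standard_normal((16, 3)) * 0.5
for _ in range(2000):
    H = np.tanh(X @ W1)                           # hidden layer
    Z = H @ W2
    P = np.exp(Z - Z.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)                  # softmax over 3 actions
    dZ = (P - Y) / len(X)                         # cross-entropy gradient
    G2 = H.T @ dZ
    G1 = X.T @ ((dZ @ W2.T) * (1 - H ** 2))       # backpropagated gradient
    W2 -= 0.5 * G2
    W1 -= 0.5 * G1

acc = (P.argmax(1) == Y.argmax(1)).mean()         # training accuracy
```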
ERIC Educational Resources Information Center
Doty, Keith L.
1999-01-01
Research on neural networks and hippocampal function demonstrating how mammals construct mental maps and develop navigation strategies is being used to create Intelligent Autonomous Mobile Robots (IAMRs). Such robots are able to recognize landmarks and navigate without "vision." (SK)
Compliance control with embedded neural elements
NASA Technical Reports Server (NTRS)
Venkataraman, S. T.; Gulati, S.
1992-01-01
The authors discuss a control approach that embeds the neural elements within a model-based compliant control architecture for robotic tasks that involve contact with unstructured environments. Compliance control experiments have been performed on actual robotics hardware to demonstrate the performance of contact control schemes with neural elements. System parameters were identified under the assumption that environment dynamics have a fixed nonlinear structure. A robotics research arm, placed in contact with a single degree-of-freedom electromechanical environment dynamics emulator, was commanded to move through a desired trajectory. The command was implemented by using a compliant control strategy.
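The contact behaviour such a compliant controller aims for can be illustrated with a one-degree-of-freedom impedance law (a virtual spring-damper) pressed against a stiff environment. All constants are illustrative, not from the experiments described above.

```python
# 1-DOF compliance sketch: the robot is commanded slightly "into" a stiff
# wall; the impedance law trades position error for a bounded contact force.
m, d, k = 1.0, 20.0, 100.0        # desired inertia, damping, stiffness
k_env, x_wall = 500.0, 0.0        # stiff environment at x = 0
x_d = 0.01                        # commanded penetration (10 mm)
dt = 1e-3

x, v = -0.05, 0.0                 # start in free space
for _ in range(5000):
    f_ext = -k_env * max(x - x_wall, 0.0)     # environment contact force
    a = (k * (x_d - x) - d * v + f_ext) / m   # impedance control law
    v += a * dt                               # semi-implicit Euler step
    x += v * dt

f_contact = k_env * max(x - x_wall, 0.0)
# steady state: k*(x_d - x) = k_env*x, i.e. the two springs in series
expected = k * k_env * x_d / (k + k_env)
```

The steady-state force depends on both stiffnesses, which is why identifying the environment dynamics (as in the experiments above) matters for contact control.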
Learning robot actions based on self-organising language memory.
Wermter, Stefan; Elshaw, Mark
2003-01-01
In the MirrorBot project we examine perceptual processes using models of cortical assemblies and mirror neurons to explore the emergence of semantic representations of actions, percepts and concepts in a neural robot. The hypothesis under investigation is whether a neural model will produce a life-like perception system for actions. In this context we focus in this paper on how instructions for actions can be modeled in a self-organising memory. Current approaches for robot control often do not use language and ignore neural learning. However, our approach uses language instruction and draws from the concepts of regional distributed modularity, self-organisation and neural assemblies. We describe a self-organising model that clusters actions into different locations depending on the body part they are associated with. In particular, we use actual sensor readings from the MIRA robot to represent semantic features of the action verbs. Furthermore, we outline a hierarchical computational model for a self-organising robot action control system using language for instruction.
Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub
2015-01-01
An exact classification of the different gait phases is essential to enable the control of exoskeleton robots and to detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower-limb exoskeleton robots. In such robots, foot sensors with force-sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower-limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986
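The defining feature of a NARX classifier, tapped delay lines of past inputs and past outputs feeding the network, can be sketched as a regressor-building step. The feature counts and delay orders below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def narx_features(u, y, nu=3, ny=2):
    """Build NARX regressors: each row holds the past nu input vectors
    u[k-nu..k-1] and the past ny outputs y[k-ny..k-1], which together
    feed the network that predicts the gait phase at step k."""
    T = len(u)
    rows = []
    for k in range(max(nu, ny), T):
        rows.append(np.concatenate([u[k - nu:k].ravel(),
                                    y[k - ny:k].ravel()]))
    return np.array(rows)

rng = np.random.default_rng(0)
u = rng.standard_normal((100, 4))   # e.g. segment orientations + joint rates
y = np.zeros((100, 1))              # previously predicted gait phases
Phi = narx_features(u, y)
```

The recurrent `y` terms are what distinguish NARX from the plain multilayer perceptron it is compared against, which sees only the current `u[k]`.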
NASA Astrophysics Data System (ADS)
Patkin, M. L.; Rogachev, G. N.
2018-02-01
A method for constructing a multi-agent control system for mobile robots, based on reinforcement learning with deep neural networks, is considered. The control system is synthesized with a modified Actor-Critic method in which the Actor module is divided into an Action Actor and a Communication Actor, so that each agent can simultaneously control its mobile robot and communicate with its partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The Actor and Critic functions are approximated by deep neural networks. The Critic's value function is trained using the TD-error method, and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and a computer simulation applied the method to the control problem of two robots pursuing two goals.
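The Critic's TD-error rule at the heart of this scheme can be shown in its simplest tabular form. The paper approximates the value function with a deep network and trains the Actor with DDPG; this toy two-state chain (made up here) only illustrates the TD update itself.

```python
# Tabular TD(0) sketch: state 0 steps to state 1 (reward 0); state 1
# self-loops with reward 1. True values: V(1) = 1/(1-gamma) = 10,
# V(0) = gamma * V(1) = 9.
gamma, alpha = 0.9, 0.1
V = {0: 0.0, 1: 0.0}
transitions = {0: (1, 0.0), 1: (1, 1.0)}   # state -> (next state, reward)

for _ in range(500):
    for s, (s2, r) in transitions.items():
        td = r + gamma * V[s2] - V[s]      # TD error
        V[s] += alpha * td                 # Critic update
```

In the deep version, `V` becomes a network and the same TD error drives its gradient step, while the Actors receive policy gradients (DDPG, and inter-agent gradients for the Communication Actor).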
A new neural net approach to robot 3D perception and visuo-motor coordination
NASA Technical Reports Server (NTRS)
Lee, Sukhan
1992-01-01
A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and the robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network. PMID:28955217
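In its simplest bound-constrained form, a discrete-time recurrent network for solving a QP is a projection iteration: the state moves along the negative gradient and is clipped to the feasible box each step. The paper's scheme also handles equality constraints; the problem data below are a made-up toy instance.

```python
import numpy as np

# min 0.5 x'Qx + c'x  subject to  lb <= x <= ub   (toy problem)
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
lb, ub = np.array([0.0, 0.0]), np.array([1.5, 1.5])

def proj(x):
    """Projection onto the box [lb, ub] (the network's activation)."""
    return np.clip(x, lb, ub)

x = np.zeros(2)
h = 0.1                                   # step size / time constant
for _ in range(500):
    x = proj(x - h * (Q @ x + c))         # recurrent network iteration

grad = Q @ x + c                          # KKT check: gradient at the solution
```

The unconstrained minimizer is about (1.64, 1.45), so both variables end up pinned at the upper bound 1.5; the negative first gradient component confirms the bound is active.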
A GPU-accelerated cortical neural network model for visually guided robot navigation.
Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L
2015-12-01
Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate-based motion energy model for area V1 and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.
Center for Neural Engineering: applications of pulse-coupled neural networks
NASA Astrophysics Data System (ADS)
Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel
1999-03-01
The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells form groups, and groups form larger groupings, based on the synchronicity of their oscillations; the number of cells that fire at each input presentation forms an output time series, also called an 'icon'. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a biologically plausible model with solid functional potential. This paper summarizes several projects, and their results, in which we successfully applied the PCNN. In the first project, the PCNN was applied to object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were fed into a feedforward neural network for classification. In the second project, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. PCNN-based features were extracted from images taken by the robot vision system and used, in conjunction with the map generated by fusing sonar and wheel-encoder data, for navigation of the mobile robot. In our third project, we applied the PCNN to speaker recognition. Spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
Neural network-based landmark detection for mobile robot
NASA Astrophysics Data System (ADS)
Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo
1996-03-01
A mobile robot essentially has access only to relative position data about the real world, yet in many cases it must know where it is located. In those cases, a useful method is to detect landmarks in the real world and adjust the robot's position estimate using them. From this point of view, it is essential to develop a mobile robot that can accomplish path planning using natural or artificial landmarks. However, artificial landmarks are often difficult to install, and natural landmarks are complicated to detect. This paper describes a method for acquiring, from the mobile robot's sensor data, the landmarks needed for path planning. The landmarks discussed here are natural ones, obtained by compressing the robot's sensor data. The sensor data are compressed and memorized by a five-layer neural network, called a sand-glass model, trained so that its output reproduces its input exactly. The intermediate-layer activity then yields a compressed representation that serves as the landmark data. Even if the sensor data are ambiguous or voluminous, landmark detection is easy because the data are compressed and classified by the neural network. Using the last three layers, the compressed landmark data can be expanded back to approximately the original data. The trained network thus categorizes detected sensor data into known landmarks.
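The sand-glass idea can be sketched as a five-layer autoencoder trained to reproduce its input, with the middle (bottleneck) layer providing the compressed landmark code. The layer sizes, data, and learning rate below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.uniform(0.0, 1.0, (500, 24))       # stand-in range-sensor snapshots
sizes = [24, 12, 4, 12, 24]                # five layers, bottleneck of 4
Ws = [rng.standard_normal((a, b)) * 0.2
      for a, b in zip(sizes[:-1], sizes[1:])]

def forward(X):
    acts = [X]
    for W in Ws:
        acts.append(np.tanh(acts[-1] @ W))
    return acts

def mse(acts):
    return float(((acts[-1] - X) ** 2).mean())

loss_before = mse(forward(X))
lr = 0.01
for _ in range(300):                       # train input == target (autoencoder)
    acts = forward(X)
    delta = 2.0 * (acts[-1] - X) / len(X)
    for i in reversed(range(len(Ws))):
        delta = delta * (1.0 - acts[i + 1] ** 2)   # tanh derivative
        grad = acts[i].T @ delta
        delta = delta @ Ws[i].T            # propagate before updating Ws[i]
        Ws[i] -= lr * grad

code = forward(X)[2]                       # middle-layer landmark code
loss_after = mse(forward(X))
```

The 4-dimensional `code` is the compressed landmark descriptor; the last three layers expand it back toward the original sensor reading, as the abstract describes.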
A neural network-based exploratory learning and motor planning system for co-robots
Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano
2015-01-01
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640
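The motor-babbling idea, pairing random motor commands with their sensed outcomes and then reusing that experience to reach goals, can be sketched with a toy planar two-link arm standing in for the Calliope. The arm model, sample count, and nearest-neighbour motor selection below are simplifying assumptions, far cruder than the paper's neural system:

```python
import numpy as np

rng = np.random.default_rng(2)

def arm(q):
    """Stand-in 'body': planar 2-link arm with link lengths 1.0 and 0.8."""
    return np.array([np.cos(q[0]) + 0.8 * np.cos(q[0] + q[1]),
                     np.sin(q[0]) + 0.8 * np.sin(q[0] + q[1])])

# motor babbling: random joint commands paired with the sensed hand position
Q = rng.uniform(-np.pi, np.pi, (2000, 2))
X = np.array([arm(q) for q in Q])

def reach(target):
    # exploratory 'internal model': reuse the babbled command whose sensed
    # outcome lies closest to the goal
    i = np.argmin(np.linalg.norm(X - target, axis=1))
    return Q[i]

target = np.array([1.2, 0.5])
q = reach(target)
print(np.linalg.norm(arm(q) - target))   # small residual reach error
```

The point of the sketch is the loop structure, act randomly, observe, then query the stored sensorimotor pairs, rather than the (deliberately naive) lookup used as the model.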
Neural net target-tracking system using structured laser patterns
NASA Astrophysics Data System (ADS)
Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun
1996-06-01
In this paper, we describe a robot end-effector tracking system that uses sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features from which the networks detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and match them against unknown input features, recognizing the position of the robot end-effector. Since only a minimal number of samples are used for the different orientations of the end-effector, the system relies on the generalization capability of the neural networks to handle unknown input features. A feedforward neural network trained with backpropagation learning detects the position of the end-effector, and another feedforward module estimates the motion from a sequence of images and controls the movements of the end-effector. Combining the two neural networks for recognizing the end-effector and estimating its motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.
Weidel, Philipp; Djurfeldt, Mikael; Duarte, Renato C; Morrison, Abigail
2016-01-01
In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning.
Li, Yongcheng; Sun, Rong; Zhang, Bin; Wang, Yuechao; Li, Hongyi
2015-01-01
Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller: 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural networks. Compared to the random cultures, the '4Q' cultures presented markedly different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better under the stimulus because of short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.
Control of autonomous robot using neural networks
NASA Astrophysics Data System (ADS)
Barton, Adam; Volna, Eva
2017-07-01
The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.
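A simplified ART1 sketch shows how a training set of binary patterns can be clustered under a vigilance parameter. Note this compresses ART1's search-and-reset cycle into a single vigilance-filtered choice; `rho`, `beta`, and the patterns are illustrative:

```python
import numpy as np

def art1(patterns, rho=0.7, beta=1.0):
    """Simplified ART1: clusters binary patterns, committing a new category
    whenever no existing prototype passes the vigilance test rho."""
    protos, labels = [], []
    for x in patterns:
        x = np.asarray(x, dtype=bool)
        best, best_score = None, -1.0
        for j, w in enumerate(protos):
            match = np.logical_and(x, w).sum()
            score = match / (beta + w.sum())               # choice function
            if score > best_score and match / x.sum() >= rho:  # vigilance test
                best, best_score = j, score
        if best is None:
            protos.append(x.copy())                        # commit a new category
            best = len(protos) - 1
        else:
            protos[best] = np.logical_and(protos[best], x)  # fast learning
        labels.append(best)
    return labels, protos

pats = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]
labels, protos = art1(pats, rho=0.6)
print(labels)
```

Raising `rho` makes categories finer, which is how ART-style filtering can prune a training set down to representative situations for the controlling network.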
Neural joint control for Space Shuttle Remote Manipulator System
NASA Technical Reports Server (NTRS)
Atkins, Mark A.; Cox, Chadwick J.; Lothers, Michael D.; Pap, Robert M.; Thomas, Charles R.
1992-01-01
Neural networks are being used to control a robot arm in a telerobotic operation. The concept uses neural networks for both joint and inverse kinematics in a robotic control application. An upper level neural network is trained to learn inverse kinematic mappings. The output, a trajectory, is then fed to the Decentralized Adaptive Joint Controllers. This neural network implementation has shown that the controlled arm recovers from unexpected payload changes while following the reference trajectory. The neural network-based decentralized joint controller is faster, more robust and efficient than conventional approaches. Implementations of this architecture are discussed that would relax assumptions about dynamics, obstacles, and heavy loads. This system is being developed to use with the Space Shuttle Remote Manipulator System.
Center for Neural Engineering at Tennessee State University, ASSERT Annual Progress Report.
1995-07-01
... neural networks. Their research topics are: (1) developing frequency-dependent oscillatory neural networks; (2) long-term potentiation learning rules ... as applied to spatial navigation; (3) designing and building a servo-joint robotic arm; and (4) neural network-based prosthesis control. One graduate student ...
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
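The paper identifies the time-varying image Jacobian with local neural networks trained over repeated trials; as a much simpler stand-in for model-free Jacobian estimation, a rank-one Broyden update conveys the core idea of refining the estimate from each observed joint move and feature change. The 'true' Jacobian below is a made-up placeholder for the unknown camera-robot mapping:

```python
import numpy as np

def broyden_step(J, dq, dy):
    """Rank-one (Broyden) update of an estimated image Jacobian from an
    observed joint increment dq and image-feature change dy."""
    return J + np.outer(dy - J @ dq, dq) / (dq @ dq)

# unknown 'true' mapping that the estimator must track (illustrative values)
J_true = np.array([[1.0, 0.5], [-0.3, 2.0]])
J = np.eye(2)                                 # crude initial guess
rng = np.random.default_rng(3)
for _ in range(200):
    dq = rng.normal(0, 0.1, 2)                # small exploratory joint move
    dy = J_true @ dq                          # observed feature change
    J = broyden_step(J, dq, dy)
print(np.abs(J - J_true).max())
```

Each update makes the estimate exact along the latest motion direction; the paper's weight-modification algorithm additionally guarantees the estimate stays invertible, which this bare sketch does not.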
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
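The RBFNN compensation rests on Gaussian radial basis features being able to approximate a smooth nonlinearity with small, bounded error. A batch least-squares fit (rather than the paper's adaptive update laws) sketches this; the centers, width, and target function are illustrative:

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    # Gaussian radial basis activations for scalar inputs x
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(1)
centers = np.linspace(-2, 2, 12)
x = rng.uniform(-2, 2, 200)
y = np.sin(2 * x) + 0.3 * x ** 2              # stand-in for the unknown nonlinearity

Phi = rbf_features(x, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights by least squares

x_test = np.linspace(-1.5, 1.5, 50)
y_hat = rbf_features(x_test, centers) @ w
err = np.max(np.abs(y_hat - (np.sin(2 * x_test) + 0.3 * x_test ** 2)))
print(err)
```

With overlapping Gaussians the residual over the interior of the training range is tiny, which is the property the bounded-approximation-error assumption relies on.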
Yue, Shigang; Rind, F Claire
2006-05-01
The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.
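A toy version of the LGMD network with grouped excitation can be sketched as follows. The inhibition weight, grouping threshold, and expanding-square 'approach' stimulus are assumptions, and the real model has further layers and temporal filtering; but the membrane potential grows as the looming object's edges expand, while isolated excitation is filtered out:

```python
import numpy as np

def neighbour_sum(A):
    """Sum over each cell's 3x3 neighbourhood (excluding the cell itself)."""
    P = np.pad(A, 1)
    return sum(P[i:i + A.shape[0], j:j + A.shape[1]]
               for i in range(3) for j in range(3)) - A

def lgmd_step(frame, prev_frame, prev_exc, wi=0.35):
    E = np.abs(frame - prev_frame)            # excitation: luminance change
    I = neighbour_sum(prev_exc) / 8.0         # delayed lateral inhibition
    S = np.maximum(E - wi * I, 0.0)
    # grouped excitation: keep cells whose neighbourhood is also active,
    # filtering out isolated excitation from background detail
    G = S * (neighbour_sum(S) / 8.0 > 0.1)
    k = G.sum() / G.size                      # normalised membrane potential
    return k, E

frames = []
for r in range(1, 6):                         # dark square expanding = approach
    f = np.zeros((20, 20))
    f[10 - r:10 + r, 10 - r:10 + r] = 1.0
    frames.append(f)

ks, prev_exc = [], np.zeros((20, 20))
for a, b in zip(frames, frames[1:]):
    k, prev_exc = lgmd_step(b, a, prev_exc)
    ks.append(k)
print(ks)
```

Thresholding `k` (or its growth rate) yields the collision alarm that drove the robot's avoidance behaviour.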
Adaptive control strategies for flexible robotic arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1993-01-01
The motivation of this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most of the existing neural network control algorithms are based on the direct method and exhibit very high sensitivity if not unstable closed-loop behavior. Therefore a neural self-tuning control (NSTC) algorithm is developed and applied to this problem and showed promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
Biologically inspired adaptive walking of a quadruped robot.
Kimura, Hiroshi; Fukuoka, Yasuhiro; Cohen, Avis H
2007-01-15
We describe here our efforts to induce a quadruped robot to walk at medium walking speed on irregular terrain based on biological concepts. We propose the necessary conditions for stable dynamic walking on irregular terrain in general, and we design the mechanical and neural systems by comparing biological concepts with those conditions expressed in physical terms. A PD controller at the joints constructs a virtual spring-damper system as the viscoelasticity model of a muscle. The neural system model consists of a central pattern generator (CPG), reflexes, and responses. We validate the effectiveness of the proposed neural system model using the quadruped robots called 'Tekken 1 & 2'. MPEG footage of experiments can be seen at http://www.kimura.is.uec.ac.jp.
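The CPG component can be sketched with a Matsuoka-style half-center oscillator, a standard model of two mutually inhibiting neurons with adaptation. The time constants and gains below are generic textbook values, not Tekken's; the alternating outputs are the kind of signal used to drive extensor/flexor phases:

```python
import numpy as np

def matsuoka_cpg(steps=5000, dt=0.002, tau=0.05, tau_v=0.6,
                 beta=2.5, w=2.5, s=1.0):
    """Two mutually inhibiting Matsuoka neurons: a minimal CPG whose
    alternating outputs can pace left/right or extensor/flexor activity."""
    u = np.array([0.1, 0.0])   # membrane states (slight asymmetry to start)
    v = np.zeros(2)            # self-inhibition (adaptation) states
    ys = []
    for _ in range(steps):
        y = np.maximum(u, 0.0)                    # rectified firing rates
        du = (-u - beta * v - w * y[::-1] + s) / tau
        dv = (-v + y) / tau_v
        u = u + dt * du                           # forward-Euler integration
        v = v + dt * dv
        ys.append(y.copy())
    return np.array(ys)

ys = matsuoka_cpg()
print(ys[-2000:].max(axis=0))    # both neurons burst within the last few cycles
```

With tonic drive `s` the pair settles into a limit cycle: while one neuron fires, it suppresses the other until its own adaptation variable builds up and the roles swap.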
Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates dynamics of biological neural networks in electronic circuits offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic for analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle-avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed signal analog/digital neuromorphic hardware. PMID:28747883
Integrating Artificial Immune, Neural and Endrocine Systems in Autonomous Sailing Robots
2010-09-24
... system - Development of an adaptive hormone system capable of changing operation and control of the neural network depending on changing environmental conditions - First basic design of the MOOP and a simple neural-endocrine based ...
Neural Network Based Sensory Fusion for Landmark Detection
NASA Technical Reports Server (NTRS)
Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.
1997-01-01
NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles that can navigate in unstructured, unknown, and uncertain environments. Landmark-based navigation is a new area of research that differs from traditional goal-oriented navigation, in which a mobile robot starts from an initial point and reaches a destination along a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current landmark-navigation algorithms, however, pose several constraints. First, they require large memories to store the images. Second, comparing the images using traditional methods is computationally intensive, so real-time implementation is difficult. The method proposed here consists of three stages. The first stage uses a heuristic-based algorithm to identify significant objects. The second stage uses a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the neural networks' classification results for efficient and intelligent navigation.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
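A discretised projection neural network illustrates how a recurrent network can settle on the solution of a constrained QP. For brevity this sketch handles box constraints only, whereas the TVC-DACMG scheme handles time-varying equality and inequality constraints; the problem data below are illustrative:

```python
import numpy as np

def projection_network(Q, c, lo, hi, alpha=0.05, steps=3000):
    """Discretised projection neural network for the box-constrained QP
    min 0.5 x'Qx + c'x, lo <= x <= hi: the state flows along the projected
    negative gradient until it settles at the optimiser."""
    x = np.zeros_like(c)
    for _ in range(steps):
        x = np.clip(x - alpha * (Q @ x + c), lo, hi)   # one recurrent update
    return x

# objective (x1 - 1)^2 + (x2 - 2)^2 with 0 <= x <= 1.5;
# the optimum clips the second coordinate at its upper bound
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
lo, hi = np.zeros(2), np.array([1.5, 1.5])
print(projection_network(Q, c, lo, hi))
```

The fixed point of the update is exactly the KKT point of the QP, which is why such networks can serve as online solvers inside a kinematic control loop.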
A limit-cycle self-organizing map architecture for stable arm control.
Huang, Di-Wei; Gentili, Rodolphe J; Katz, Garrett E; Reggia, James A
2017-01-01
Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers.
Learning and adaptation: neural and behavioural mechanisms behind behaviour change
NASA Astrophysics Data System (ADS)
Lowe, Robert; Sandamirskaya, Yulia
2018-01-01
This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF), in conjunction with Elman neural network (ENN) learning techniques. The global map relationship between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is arrived at using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair vector (obtained from the KF cycle) to ensure globally stable robot manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.
SVR versus neural-fuzzy network controllers for the sagittal balance of a biped robot.
Ferreira, João P; Crisóstomo, Manuel M; Coimbra, A Paulo
2009-12-01
The real-time balance control of an eight-link biped robot using a zero moment point (ZMP) dynamic model is difficult due to the processing time of the corresponding equations. To overcome this limitation, two alternative intelligent computing control techniques were compared: one based on support vector regression (SVR) and another based on a first-order Takagi-Sugeno-Kang (TSK)-type neural-fuzzy (NF) network. Both methods use the ZMP error and its variation as inputs and the output is the correction of the robot's torso necessary for its sagittal balance. The SVR and the NF were trained based on simulation data and their performance was verified with a real biped robot. Two performance indexes are proposed to evaluate and compare the online performance of the two control methods. The ZMP is calculated by reading four force sensors placed under each robot's foot. The gait implemented in this biped is similar to a human gait that was acquired and adapted to the robot's size. Some experiments are presented and the results show that the implemented gait combined either with the SVR controller or with the TSK NF network controller can be used to control this biped robot. The SVR and the NF controllers exhibit similar stability, but the SVR controller runs about 50 times faster.
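The ZMP measurement from four foot-mounted force sensors reduces to a force-weighted average of the sensor positions. The sensor layout below is a hypothetical example, not this biped's geometry:

```python
import numpy as np

def foot_zmp(forces, positions):
    """Zero-moment point of one foot from four vertical force sensors:
    the force-weighted average of the sensor positions."""
    f = np.asarray(forces, dtype=float)
    p = np.asarray(positions, dtype=float)   # (4, 2) sensor x,y in the foot frame
    return (f[:, None] * p).sum(axis=0) / f.sum()

# sensors at the corners of a 10 cm x 6 cm foot (hypothetical layout)
pos = [(0.05, 0.03), (0.05, -0.03), (-0.05, 0.03), (-0.05, -0.03)]
zmp = foot_zmp([30.0, 30.0, 10.0, 10.0], pos)
print(zmp)   # pressure shifted toward the two front sensors
```

The controllers then act on the error between this measured ZMP and the reference ZMP of the gait, correcting the torso to keep the point inside the support polygon.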
Li, Zhijun; Ge, Shuzhi Sam; Liu, Sibang
2014-08-01
This paper investigates optimal distribution and control of the feet forces of quadruped robots under external disturbance forces. First, we formulate the constrained dynamics of quadruped robots and derive a reduced-order dynamical model of motion/force. When an external wrench acts on the quadruped robot, the distribution of the required forces and moments among the supporting legs is handled as a tip-point force distribution used to equilibrate the wrench. Then, a gradient neural network is adopted to minimize a quadratic objective function subject to linear equality and inequality constraints. For the resulting optimized tip-point forces and the motion of the legs, we propose hybrid motion/force control based on an adaptive neural network to compensate for perturbations in the environment and to approximate the feedforward force and impedance of the leg joints. The proposed control can cope with uncertainties, including approximation error and external perturbations, and is verified in simulation.
Aerial robot intelligent control method based on back-stepping
NASA Astrophysics Data System (ADS)
Zhou, Jian; Xue, Qian
2018-05-01
The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty, so a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and tune the parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law achieves the desired attitude tracking performance and good robustness in the presence of uncertainties and large errors in the model parameters.
Self-organization via active exploration in robotic applications
NASA Technical Reports Server (NTRS)
Ogmen, H.; Prakash, R. V.
1992-01-01
We describe a neural network based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We indicate that a self-organization capability is necessary for any system to operate successfully in a non-stationary environment, and we suggest that self-organization should be based on an active exploration process. We investigated neural architectures having novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulations. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the approaches proposed within the new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural network based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991). This robot has the capability of position-, size- and rotation-invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties, because of the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, to understand on the one hand how the emergence of such invariances is possible, and on the other hand the dynamics that lead to these invariances. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by the inverting-glass experiments of Helmholtz. We will introduce Pavlovian conditioning circuits in our future work, with the precise objective of achieving the generation, coordination, and internalization of sequences of actions.
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as the guiding of a mobile robot to a desired destination or along a desired path in an environment characterized by a terrain and a set of distinct objects, such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots due to their high robustness and error-tolerance ability. For a mobile robot to navigate automatically and rapidly, an important factor is to identify and classify the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
Method for neural network control of motion using real-time environmental feedback
NASA Technical Reports Server (NTRS)
Buckley, Theresa M. (Inventor)
1997-01-01
A method of motion control for robotics and other automatically controlled machinery using a neural network controller with real-time environmental feedback. The method is illustrated with a two-finger robotic hand having proximity sensors and force sensors that provide environmental feedback signals. The neural network controller is taught to control the robotic hand through training sets using back-propagation methods. The training sets are created by recording the control signals and the feedback signal as the robotic hand or a simulation of the robotic hand is moved through a representative grasping motion. The data recorded is divided into discrete increments of time and the feedback data is shifted out of phase with the control signal data so that the feedback signal data lag one time increment behind the control signal data. The modified data is presented to the neural network controller as a training set. The time lag introduced into the data allows the neural network controller to account for the temporal component of the robotic motion. Thus trained, the neural network controlled robotic hand is able to grasp a wide variety of different objects by generalizing from the training sets.
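The one-increment phase shift described above is easy to state concretely: each control vector at time t is paired with the feedback measured at time t-1. A minimal sketch (array names and shapes are assumptions, not the patent's notation):

```python
import numpy as np

def make_lagged_training_set(controls, feedback):
    """Build training pairs in which the feedback lags one time
    increment behind the control signal, as in the described method.
    controls, feedback: arrays of shape (T, n_ctrl) and (T, n_fb)."""
    inputs = feedback[:-1]   # feedback at t-1 becomes the network input
    targets = controls[1:]   # control signal at t becomes the target
    return inputs, targets
```

Training a network on such pairs lets it account for the temporal component of the motion, since each action is conditioned on the sensation that preceded it.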
Wang, Yin
2015-01-01
Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418
An Intelligent Agent Approach for Teaching Neural Networks Using LEGO[R] Handy Board Robots
ERIC Educational Resources Information Center
Imberman, Susan P.
2004-01-01
In this article we describe a project for an undergraduate artificial intelligence class. The project teaches neural networks using LEGO[R] handy board robots. Students construct robots with two motors and two photosensors. Photosensors provide readings that act as inputs for the neural network. Output values power the motors and maintain the…
Neural network-based multiple robot simultaneous localization and mapping.
Saeedi, Sajad; Paull, Liam; Trentini, Michael; Li, Howard
2011-12-01
In this paper, a decentralized platform for simultaneous localization and mapping (SLAM) with multiple robots is developed. Each robot performs single-robot view-based SLAM using an extended Kalman filter to fuse data from two encoders and a laser ranger. To extend this approach to multiple-robot SLAM, a novel occupancy grid map fusion algorithm is proposed. Map fusion is achieved through a multistep process that includes image preprocessing, map learning (clustering) using neural networks, relative orientation extraction using norm histogram cross-correlation and a Radon transform, relative translation extraction using matching norm vectors, and then verification of the results. The proposed map learning method is a process based on the self-organizing map. In the learning phase, the obstacles of the map are learned by clustering the occupied cells of the map into clusters. The learning is an unsupervised process which can be done on the fly, without any need for output training patterns. The clusters represent the spatial form of the map and make further analyses of the map easier and faster. Also, clusters can be interpreted as features extracted from the occupancy grid map, so the map fusion problem becomes a task of matching features. Results of experiments performed in a real environment with multiple robots prove the effectiveness of the proposed solution.
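The map-learning step above clusters the occupied cells of an occupancy grid with a self-organizing update. A toy caricature of that clustering idea (the winner-take-all rule, learning-rate schedule, and parameters are assumptions and are far simpler than the paper's SOM):

```python
import numpy as np

def cluster_occupied_cells(grid, n_clusters=4, iters=200, seed=0):
    """Unsupervised clustering of occupied grid cells: cluster centres
    are repeatedly pulled toward randomly sampled occupied cells
    (winner-take-all update with a decaying learning rate)."""
    rng = np.random.default_rng(seed)
    pts = np.argwhere(grid > 0).astype(float)  # (row, col) of occupied cells
    centres = pts[rng.choice(len(pts), n_clusters, replace=False)]
    for t in range(iters):
        lr = 0.5 * (1.0 - t / iters)           # decaying learning rate
        p = pts[rng.integers(len(pts))]        # sample an occupied cell
        w = np.argmin(((centres - p) ** 2).sum(axis=1))  # winning centre
        centres[w] += lr * (p - centres[w])    # move winner toward sample
    return centres
```

The resulting centres summarize the spatial form of the map, which is what makes subsequent feature matching between robots' maps tractable.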
Rare Neural Correlations Implement Robotic Conditioning with Delayed Rewards and Disturbances
Soltoggio, Andrea; Lemme, Andre; Reinhart, Felix; Steil, Jochen J.
2013-01-01
Neural conditioning associates cues and actions with following rewards. The environments in which robots operate, however, are pervaded by a variety of disturbing stimuli and uncertain timing. In particular, variable reward delays make it difficult to reconstruct which previous actions are responsible for following rewards. Such an uncertainty is handled by biological neural networks, but represents a challenge for computational models, suggesting the lack of a satisfactory theory for robotic neural conditioning. The present study demonstrates the use of rare neural correlations in making correct associations between rewards and previous cues or actions. Rare correlations are functional in selecting sparse synapses to be eligible for later weight updates if a reward occurs. The repetition of this process singles out the associating and reward-triggering pathways, and thereby copes with distal rewards. The neural network displays macro-level classical and operant conditioning, which is demonstrated in an interactive real-life human-robot interaction. The proposed mechanism models realistic conditioning in humans and animals and implements similar behaviors in neuro-robotic platforms. PMID:23565092
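The rare-correlations mechanism can be caricatured with an eligibility trace that is set only by unusually strong pre/post coincidences and is converted into a weight change by a later reward. A minimal sketch (the threshold, decay, and learning rate are assumptions, not the paper's model):

```python
import numpy as np

def step(elig, w, pre, post, reward, theta=0.9, decay=0.95, lr=0.1):
    """One update of a rare-correlation conditioning sketch: a synapse
    becomes eligible only when the pre/post coincidence exceeds a high
    threshold (a rare event); a later scalar reward converts the
    decaying eligibility into a weight change."""
    coincidence = np.outer(post, pre)
    elig = decay * elig + (coincidence > theta)  # flag rare correlations
    w = w + lr * reward * elig                   # distal-reward update
    return elig, w
```

Because eligibility decays rather than vanishing, a reward arriving several steps after the triggering coincidence still credits the correct synapse.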
Path optimisation of a mobile robot using an artificial neural network controller
NASA Astrophysics Data System (ADS)
Singh, M. K.; Parhi, D. R.
2011-01-01
This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to its position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.
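The controller described above is a small feed-forward network mapping three obstacle distances and a target angle to a steering angle. An untrained forward-pass sketch (layer sizes, activations and random weights are assumptions; the article trains its four-layer net with back-propagation):

```python
import numpy as np

rng = np.random.default_rng(1)

def init_layer(n_in, n_out):
    # small random weights, zero biases (illustrative initialization)
    return rng.normal(0, 0.5, (n_in, n_out)), np.zeros(n_out)

# 4 inputs (left/right/front obstacle distance, target angle)
# -> two hidden layers -> 1 output (steering angle)
layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 1)]

def steering_angle(x):
    h = np.asarray(x, dtype=float)
    for W, b in layers[:-1]:
        h = np.tanh(h @ W + b)        # hidden activations
    W, b = layers[-1]
    return float(np.tanh(h @ W + b))  # bounded steering output in (-1, 1)
```

A bounded output is a natural choice here, since the steering command must stay within the robot's physical turning limits.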
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
NASA Astrophysics Data System (ADS)
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-02-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
Quadrupedal Robot Locomotion: A Biologically Inspired Approach and Its Hardware Implementation
Espinal, A.; Rostro-Gonzalez, H.; Carpio, M.; Guerra-Hernandez, E. I.; Ornelas-Rodriguez, M.; Puga-Soberanes, H. J.; Sotelo-Figueroa, M. A.; Melin, P.
2016-01-01
A bioinspired locomotion system for a quadruped robot is presented. Locomotion is achieved by a spiking neural network (SNN) that acts as a Central Pattern Generator (CPG) producing different locomotion patterns represented by their raster plots. To generate these patterns, the SNN is configured with specific parameters (synaptic weights and topologies), which were estimated by a metaheuristic method based on Christiansen Grammar Evolution (CGE). The system has been implemented and validated on two robot platforms: first a quadruped robot and then a hexapod. For the hexapod, we simulated the case where two of its legs were amputated and its locomotion mechanism had to change. For the quadruped robot, the control is performed by the spiking neural network implemented on an Arduino board with 35% resource usage. In the hexapod robot, we used a Spartan 6 FPGA board with only 3% resource usage. Numerical results show the effectiveness of the proposed system in both cases. PMID:27436997
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-01-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning. PMID:26844862
Quadrupedal Robot Locomotion: A Biologically Inspired Approach and Its Hardware Implementation.
Espinal, A; Rostro-Gonzalez, H; Carpio, M; Guerra-Hernandez, E I; Ornelas-Rodriguez, M; Puga-Soberanes, H J; Sotelo-Figueroa, M A; Melin, P
2016-01-01
A bioinspired locomotion system for a quadruped robot is presented. Locomotion is achieved by a spiking neural network (SNN) that acts as a Central Pattern Generator (CPG) producing different locomotion patterns represented by their raster plots. To generate these patterns, the SNN is configured with specific parameters (synaptic weights and topologies), which were estimated by a metaheuristic method based on Christiansen Grammar Evolution (CGE). The system has been implemented and validated on two robot platforms: first a quadruped robot and then a hexapod. For the hexapod, we simulated the case where two of its legs were amputated and its locomotion mechanism had to change. For the quadruped robot, the control is performed by the spiking neural network implemented on an Arduino board with 35% resource usage. In the hexapod robot, we used a Spartan 6 FPGA board with only 3% resource usage. Numerical results show the effectiveness of the proposed system in both cases.
Adaptive walking of a quadrupedal robot based on layered biological reflexes
NASA Astrophysics Data System (ADS)
Zhang, Xiuli; Mingcheng, E.; Zeng, Xiangyu; Zheng, Haojun
2012-07-01
A multiple-legged robot is traditionally controlled by using its dynamic model, but the dynamic-model-based approach fails to achieve satisfactory performance when the robot faces rough terrain and unknown environments. Referring to animals' neural control mechanisms, a control model is built for a quadruped robot to walk adaptively. The basic rhythmic motion of the robot is controlled by a well-designed rhythmic motion controller (RMC) comprising a central pattern generator (CPG) for the hip joints and a rhythmic coupler (RC) for the knee joints; the CPG and RC are related by motion-mapping and rhythmic coupling. Multiple sensory-motor models, abstracted from the neural reflexes of a cat, are employed. These reflex models are organized and thus interact with the CPG in three layers, to meet the tasks' different requirements of complexity and response time. On the basis of the RMC and layered biological reflexes, a quadruped robot is constructed which can clear obstacles, walk uphill and downhill autonomously, and make turns voluntarily in uncertain environments, interacting with the environment in a way similar to that of an animal. The paper provides a biologically inspired architecture with which a robot can walk adaptively in uncertain environments in a simple and effective way and achieve better performance.
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Li, Lanfei; Sun, Hanxu
2017-12-01
A spherical flying robot can perform various tasks in complex and varied environments, reducing labor costs. However, it is difficult to guarantee the stability of the spherical flying robot under strong coupling and time-varying disturbances. In this paper, an artificial neural network controller (ANNC) based on an MPSO-BFGS hybrid optimization algorithm is proposed. The MPSO algorithm is used to optimize the initial weights of the controller to avoid local optimal solutions, and the BFGS algorithm is introduced to improve the convergence ability of the network. We use the Lyapunov method to analyze the stability of the ANNC. The controller is simulated under nonlinear coupling disturbance. The experimental results show that the proposed controller reaches the expected value in a shorter time than the other considered methods.
SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.
Jimenez-Romero, Cristian; Johnson, Jeffrey
2017-01-01
The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. Accomplishing these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study of and experimentation with complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
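SpikingLab's engine is a simplification of integrate-and-fire models; the core membrane update of such a model can be sketched in a few lines (the time constant, threshold, and input current below are illustrative assumptions, not SpikingLab's actual parameters):

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: the membrane
    potential leaks toward rest while integrating input current; on
    crossing the threshold the neuron emits a spike and resets."""
    v = v + dt * ((v_rest - v) / tau + i_in)
    if v >= v_thresh:
        return v_rest, True   # reset and emit a spike
    return v, False
```

Driving the neuron with a constant supra-threshold current produces the regular spike train that would appear as one row of the raster plots such tools display.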
Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems
2015-05-19
algorithm based on Age-Fitness Pareto Optimization (AFPO) ([9]) with an additional user preference objective and a neural network-based user model, we...greater than 40, which is about 5 times further than any robot traveled in our experiments. 3.3 Methods: The algorithm uses a client-server computational...architecture. The client here is an interactive program which takes a pair of controllers as input, simulates two copies of the robot with
Neural self-tuning adaptive control of non-minimum phase system
NASA Technical Reports Server (NTRS)
Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.
1993-01-01
The motivation for this research arose when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed, applied to this problem, and shown to give promising results. Simulation results of the NSTC scheme and the conventional self-tuning regulator (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
Nanowire FET Based Neural Element for Robotic Tactile Sensing Skin
Taube Navaraj, William; García Núñez, Carlos; Shakthivel, Dhayalan; Vinciguerra, Vincenzo; Labeau, Fabrice; Gregory, Duncan H.; Dahiya, Ravinder
2017-01-01
This paper presents a novel Neural Nanowire Field Effect Transistor (υ-NWFET) based hardware-implementable neural network (HNN) approach for tactile data processing in electronic skin (e-skin). The viability of Si nanowires (NWs) as the active material for υ-NWFETs in HNN is explored through modeling and demonstrated by fabricating the first device. Using υ-NWFETs to realize HNNs is an interesting approach, as by printing NWs on large-area flexible substrates it will be possible to develop a bendable tactile skin with distributed neural elements (for local data processing, as in biological skin) in the backplane. The modeling and simulation of υ-NWFET based devices show that the overlapping areas between the individual gates and the floating gate determine the initial synaptic weights of the neural network, thus validating the working of υ-NWFETs as the building block for HNN. The simulation has been further extended to υ-NWFET based circuits and a neuronal computation system, and this has been validated by interfacing it with a transparent tactile skin prototype (comprising a 6 × 6 ITO-based capacitive tactile sensor array) integrated on the palm of a 3D-printed robotic hand. In this regard, a tactile data coding system is presented to detect touch gestures and the direction of touch. Following these simulation studies, a four-gated υ-NWFET is fabricated with a Pt/Ti metal stack for the gates, source and drain, a Ni floating gate, and an Al2O3 high-k dielectric layer. The current-voltage characteristics of the fabricated υ-NWFET devices confirm the dependence of the turn-off voltages on the (synaptic) weight of each gate. The presented υ-NWFET approach is promising for a neuro-robotic tactile sensory system with distributed computing, as well as numerous futuristic applications such as prosthetics and electroceuticals. PMID:28979183
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-01-01
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-06-12
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
A neural-network approach to robotic control
NASA Technical Reports Server (NTRS)
Graham, D. P. W.; Deleuterio, G. M. T.
1993-01-01
An artificial neural-network paradigm for the control of robotic systems is presented. The approach is based on the Cerebellar Model Articulation Controller created by James Albus and incorporates several extensions. First, recognizing the essential structure of multibody equations of motion, two parallel modules are used that directly reflect the dynamical characteristics of multibody systems. Second, the architecture of the proposed network is imbued with a self-organizational capability which improves efficiency and accuracy. Also, the networks can be arranged in hierarchical fashion with each subsequent network providing finer and finer resolution.
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
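The reward-modulated Hebbian rule at the heart of the study multiplies local pre/post activity by a global scalar reward. A one-line sketch of such an update (the learning rate and vector shapes are assumptions for illustration):

```python
import numpy as np

def rmhl_update(w, pre, post, reward, lr=0.1):
    """Reward-modulated Hebbian update: the weight change is the product
    of presynaptic activity, postsynaptic activity, and a global scalar
    reward signal broadcast to every connection."""
    return w + lr * reward * np.outer(post, pre)
```

Because the reward is a single scalar, the same rule applies unchanged whether the "weights" connect neurons in a recurrent network or feed back into the physical dynamics of a compliant robot body, which is what makes the rule attractive for partially embodied control.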
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2008-10-01
In this paper, we propose a new robust output feedback control approach for flexible-joint electrically driven (FJED) robots via the observer dynamic surface design technique. The proposed method only requires position measurements of the FJED robots. To estimate the link and actuator velocity information of the FJED robots with model uncertainties, we develop an adaptive observer using self-recurrent wavelet neural networks (SRWNNs). The SRWNNs are used to approximate model uncertainties in both robot (link) dynamics and actuator dynamics, and all their weights are trained online. Based on the designed observer, the link position tracking controller using the estimated states is induced from the dynamic surface design procedure. Therefore, the proposed controller can be designed more simply than the observer backstepping controller. From the Lyapunov stability analysis, it is shown that all signals in a closed-loop adaptive system are uniformly ultimately bounded. Finally, the simulation results on a three-link FJED robot are presented to validate the good position tracking performance and robustness of the proposed control system against payload uncertainties and external disturbances.
Biologically-inspired adaptive obstacle negotiation behavior of hexapod robots
Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate
2014-01-01
Neurobiological studies have shown that insects are able to adapt leg movements and posture for obstacle negotiation in changing environments. Moreover, the distance to an obstacle where an insect begins to climb is found to be a major parameter for successful obstacle negotiation. Inspired by these findings, we present an adaptive neural control mechanism for obstacle negotiation behavior in hexapod robots. It combines locomotion control, backbone joint control, local leg reflexes, and neural learning. While the first three components generate locomotion including walking and climbing, the neural learning mechanism allows the robot to adapt its behavior for obstacle negotiation with respect to changing conditions, e.g., variable obstacle heights and different walking gaits. By successfully learning the association of an early, predictive signal (conditioned stimulus, CS) and a late, reflex signal (unconditioned stimulus, UCS), both provided by ultrasonic sensors at the front of the robot, the robot can autonomously find an appropriate distance from an obstacle to initiate climbing. The adaptive neural control was developed and tested first on a physical robot simulation, and was then successfully transferred to a real hexapod robot, called AMOS II. The results show that the robot can efficiently negotiate obstacles with a height up to 85% of the robot's leg length in simulation and 75% in a real environment. PMID:24523694
Distributed neural control of a hexapod walking vehicle
NASA Technical Reports Server (NTRS)
Beer, R. D.; Sterling, L. S.; Quinn, R. D.; Chiel, H. J.; Ritzmann, R.
1989-01-01
There has been a long-standing interest in the design of controllers for multilegged vehicles. The approach is to apply distributed control to this problem, rather than using parallel computing of a centralized algorithm. Researchers describe a distributed neural network controller for hexapod locomotion which is based on the neural control of locomotion in insects. The model considers the simplified kinematics with two degrees of freedom per leg, but the model includes the static stability constraint. Through simulation, it is demonstrated that this controller can generate a continuous range of statically stable gaits at different speeds by varying a single control parameter. In addition, the controller is extremely robust, and can continue to function even after several of its elements have been disabled. Researchers are building a small hexapod robot whose locomotion will be controlled by this network. Researchers intend to extend their model to the dynamic control of legs with more than two degrees of freedom by using data on the control of multisegmented insect legs. Another immediate application of this neural control approach is also drawn from biology: the escape reflex. Advanced robots are being equipped with tactile sensing and machine vision so that the sensory inputs to the robot controller are vast and complex. Neural networks are ideal for a lower-level safety reflex controller because of their extremely fast response time. The combination of robotics, computer modeling, and neurobiology has been remarkably fruitful, and is likely to lead to deeper insights into the problems of real-time sensorimotor control.
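The single-parameter gait idea can be caricatured without any neural network. The phase offsets and duty-cycle mapping below are invented, not Beer et al.'s circuit; the point is only that one speed parameter sweeps the stance fraction while static stability (at least three legs down) is preserved.

```python
import numpy as np

def gait_pattern(speed, n_steps=60):
    """Toy hexapod gait sketch: each leg is a phase oscillator; a single
    speed parameter trades stance time for swing time."""
    # Invented fixed phase offsets for the six legs
    phases = np.array([0.0, 0.5, 0.25, 0.75, 0.5, 0.0])
    duty = 1.0 - 0.4 * speed      # stance fraction: 1.0 (slow) -> 0.6 (fast)
    t = np.linspace(0, 1, n_steps, endpoint=False)
    # stance[i, k] is True when leg k is on the ground at time t[i]
    return ((t[:, None] + phases[None, :]) % 1.0) < duty

stance = gait_pattern(speed=1.0)
# Static stability: minimum number of legs on the ground at any instant
print(stance.sum(axis=1).min())  # → 3
```

Varying `speed` continuously between 0 and 1 moves the pattern from all-legs-down toward a fast alternating gait without ever dropping below three supporting legs.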
Deep Gate Recurrent Neural Network
2016-11-22
Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In IEEE International Conference on...tasks, such as Machine Translation (Bahdanau et al. (2015)) or Robot Reinforcement Learning (Bakker (2001)). The main idea behind these networks is to...and J. Peters. Reinforcement learning in robotics : A survey. The International Journal of Robotics Research, 32:1238–1274, 2013. ISSN 0278-3649. doi
Robust Neural Sliding Mode Control of Robot Manipulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen Tran Hiep; Pham Thuong Cat
2009-03-05
This paper proposes a robust neural sliding mode control method for robot tracking problem to overcome the noises and large uncertainties in robot dynamics. The Lyapunov direct method has been used to prove the stability of the overall system. Simulation results are given to illustrate the applicability of the proposed method.
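As a minimal illustration of the sliding-mode component alone (without the paper's neural compensation term, and with invented gains), consider a unit-mass joint tracking a constant target under a bounded unknown disturbance:

```python
import numpy as np

# Sliding-mode tracking sketch for one unit-mass joint: ddq = u + d,
# with unknown disturbance |d| <= 0.5 and switching gain k > |d|.
dt, lam, k = 1e-3, 10.0, 20.0
q, dq, qd = 0.0, 0.0, 1.0              # state and constant target position
for i in range(int(2.0 / dt)):
    t = i * dt
    e, de = q - qd, dq
    s = de + lam * e                   # sliding surface s = de + lam*e
    d = 0.5 * np.sin(50.0 * t)         # unknown bounded disturbance
    u = -lam * de - k * np.sign(s)     # equivalent term + switching term
    dq += (u + d) * dt                 # Euler step of the joint dynamics
    q += dq * dt

print(abs(q - qd) < 0.05)
```

On the surface `s = 0` the error decays exponentially regardless of `d`, because the switching gain dominates the disturbance bound; this robustness to uncertainty is what the abstract's neural term then extends to larger, structured model errors.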
NASA Astrophysics Data System (ADS)
Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling
2017-09-01
In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a steering direction in supervised mode. The images of the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines, while the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid the obstacle in the room accurately. The result confirms the effectiveness of the algorithm and our improvements in the network structure and training parameters.
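The noise augmentation mentioned in the abstract can be sketched as follows; the parameter values are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, gauss_sigma=0.05, sp_fraction=0.02):
    """Additive Gaussian noise plus salt-and-pepper noise, as described
    in the abstract (illustrative parameters)."""
    img = image + rng.normal(0.0, gauss_sigma, image.shape)
    img = np.clip(img, 0.0, 1.0)
    # Salt-and-pepper: force a random fraction of pixels to 0 or 1
    mask = rng.random(image.shape) < sp_fraction
    img[mask] = rng.integers(0, 2, mask.sum()).astype(float)
    return img

image = rng.random((64, 64))          # stand-in camera frame in [0, 1]
noisy = augment(image)
print(noisy.shape)  # → (64, 64)
```

Training on such corrupted copies of each frame is a standard way to make a small vision network less sensitive to sensor noise and lighting changes.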
Multi-layer neural networks for robot control
NASA Technical Reports Server (NTRS)
Pourboghrat, Farzad
1989-01-01
Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
NASA Technical Reports Server (NTRS)
Glass, Brian J.; Thompson, S.; Paulsen, G.
2010-01-01
Several proposed or planned planetary science missions to Mars and other Solar System bodies over the next decade require subsurface access by drilling. This paper discusses the problems of remote robotic drilling, an automation and control architecture based loosely on observed human behaviors in drilling on Earth, and an overview of robotic drilling field test results using this architecture since 2005. Both rotary-drag and rotary-percussive drills are targeted. A hybrid diagnostic approach incorporates heuristics, model-based reasoning and vibration monitoring with neural nets. Ongoing work is progressing toward flight-ready drilling software.
Mobile robots traversability awareness based on terrain visual sensory data fusion
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2007-04-01
In this paper, we have presented methods that significantly improve the robot's awareness of its terrain traversability conditions. The terrain traversability awareness is achieved by association of terrain image appearances from different poses and fusion of extracted information from multimodality imaging and range sensor data for localization and clustering of environment landmarks. Initially, we describe methods for extraction of salient features of the terrain for the purpose of landmark registration from two or more images taken from different via points along the trajectory path of the robot. The method of image registration is applied as a means of overlaying two or more images of the same terrain scene taken at different viewpoints. The registration geometrically aligns salient landmarks of two images (the reference and sensed images). A similarity matching technique is proposed for matching the terrain salient landmarks. Secondly, we present three terrain classifier models based on rule-based, supervised neural network, and fuzzy logic approaches for classification of terrain condition under uncertainty and mapping the robot's terrain perception to apt traversability measures. This paper addresses the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier.
Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain spatial and textural cues.
Neural architectures for robot intelligence.
Ritter, H; Steil, J J; Nölker, C; Röthling, F; McGuire, P
2003-01-01
We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data-mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our laboratory in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and originates from the integration of habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors. PMID:25120464
Acquiring neural signals for developing a perception and cognition model
NASA Astrophysics Data System (ADS)
Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert
2012-06-01
The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface on the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for humanoid robot walking, 6 DOFs on the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.
Experiments in Neural-Network Control of a Free-Flying Space Robot
NASA Technical Reports Server (NTRS)
Wilson, Edward
1995-01-01
Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free-flying space robot prototype. The first issue concerns the importance of true system-level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient-based optimization method such as backpropagation; but many real-world systems have Discrete Valued Functions (DVFs) that do not permit gradient-based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been substantially addressed in this research by drawing on the above contributions.
Autonomous learning in humanoid robotics through mental imagery.
Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo
2013-05-01
In this paper we focus on modeling autonomous learning to improve performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation.
Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptations that allow the animals to deal with changes in environmental conditions, such as uneven terrain, gaps, and obstacles. Biological studies have revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanisms, thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles.
Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models, which have hitherto been the state of the art, to model a subset of similar walking behaviors in walking robots. PMID:26441629
The efficacy of using human myoelectric signals to control the limbs of robots in space
NASA Technical Reports Server (NTRS)
Clark, Jane E.; Phillips, Sally J.
1988-01-01
This project was designed to investigate the usefulness of the myoelectric signal as a control in robotics applications. More specifically, the neural patterns associated with human arm and hand actions were studied to determine the efficacy of using these myoelectric signals to control the manipulator arm of a robot. The advantage of this approach to robotic control was the use of well-defined and well-practiced neural patterns already available to the system, as opposed to requiring the human operator to learn new tasks and establish new neural patterns in learning to control a joystick or mechanical coupling device.
Adaptive Control Strategies for Flexible Robotic Arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1996-01-01
The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have a wide application in approaching the general control problem of flexible space structures. The following control strategies have been developed and evaluated: neural self-tuning control algorithm, neural-network-based fuzzy logic control algorithm, and adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, the hardware implementation of a computer control system that controls the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor, has been developed. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of flexible robotic arm and has been successfully tested using this testbed.
Upper Torso Control for HOAP-2 Using Neural Networks
NASA Technical Reports Server (NTRS)
Sandoval, Steven P.
2005-01-01
Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, but it also opens up many research doors on how humans function. The overall objective is replacing humans operating in unsafe environments. A first target application is assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal for the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task will require the robot to identify several items, pick them up, transport them to the build location, then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will be using the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex models of the system must be constructed in advance; only input/output relationships are required to model the system.
NASA Astrophysics Data System (ADS)
Cao, Zhengcai; Yin, Longjie; Fu, Yili
2013-01-01
Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions of the problem do not take the robot dynamics into account in the controller design, so these controllers are difficult to realize satisfactorily in practical applications. In addition, many approaches suffer from initial speed and torque jumps, which are impractical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, applying the integration of adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller used to generate the velocity command is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce chattering is designed, which generates the torque command to make the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed by using Lyapunov theory. Finally, the simulation of the control law is implemented in the perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law can solve the speed and torque jump problems, overcome external disturbances, and provide a new solution for the vision-based stabilization of the mobile robot.
An architectural approach to create self organizing control systems for practical autonomous robots
NASA Technical Reports Server (NTRS)
Greiner, Helen
1991-01-01
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are as follows: (1) the growth problem; (2) software generation; (3) interaction with the environment; (4) reliability; and (5) resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
Dual adaptive dynamic control of mobile robots using neural networks.
Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato
2009-02-01
This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
Mobile robots exploration through cnn-based reinforcement learning.
Tai, Lei; Liu, Ming
2016-01-01
Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional-neural-network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieves exploration and obstacle-avoidance abilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
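The update at the heart of such a deep Q-network controller can be sketched independently of the CNN front end. A minimal, illustrative linear Q-learning step over a pre-extracted feature vector (all names and values here are hypothetical, not from the paper):

```python
import random

def q_values(W, phi):
    """Linear Q estimate per action: q_a = W[a] . phi."""
    return [sum(w_i * p_i for w_i, p_i in zip(w_a, phi)) for w_a in W]

def select_action(W, phi, eps=0.1):
    """Epsilon-greedy choice over the linear Q estimates."""
    if random.random() < eps:
        return random.randrange(len(W))
    q = q_values(W, phi)
    return q.index(max(q))

def q_update(W, phi, a, r, phi_next, alpha=0.5, gamma=0.9):
    """One Q-learning step on the weights of the chosen action."""
    target = r + gamma * max(q_values(W, phi_next))
    delta = target - q_values(W, phi)[a]
    W[a] = [w_i + alpha * delta * p_i for w_i, p_i in zip(W[a], phi)]
    return delta

# Two actions (e.g. turn left / turn right), two CNN-derived features.
W = [[0.0, 0.0], [0.0, 0.0]]
delta = q_update(W, phi=[1.0, 0.0], a=0, r=1.0, phi_next=[0.0, 1.0])
# After one rewarded step, action 0's weight on feature 0 has grown.
```

The full method replaces the hand-built feature vector with CNN activations and the linear readout with a deep network, but the bootstrapped target has the same form.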
Biologically Inspired SNN for Robot Control.
Nichols, Eric; McDaid, Liam J; Siddique, Nazmul
2013-02-01
This paper proposes a spiking-neural-network-based robot controller inspired by the control structures of biological systems. Information is routed through the network using facilitating dynamic synapses with short-term plasticity. Learning occurs through long-term synaptic plasticity which is implemented using the temporal difference learning rule to enable the robot to learn to associate the correct movement with the appropriate input conditions. The network self-organizes to provide memories of environments that the robot encounters. A Pioneer robot simulator with laser and sonar proximity sensors is used to verify the performance of the network with a wall-following task, and the results are presented.
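The temporal-difference rule used above for long-term plasticity can be illustrated in its simplest tabular form. This toy sketch only shows the generic TD(0) update, not the paper's spiking-synapse implementation; state names and parameter values are hypothetical:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(s) toward the bootstrapped target
    r + gamma * V(s')."""
    delta = r + gamma * V[s_next] - V[s]   # temporal-difference error
    V[s] += alpha * delta
    return delta

# Toy example: a rewarded transition raises the value of the source state.
V = {"corner": 0.0, "wall": 1.0}
delta = td0_update(V, "corner", r=1.0, s_next="wall", alpha=0.5)
```

In the paper's controller the same error signal modulates synaptic weights so that correct movements become associated with the sensor conditions that preceded them.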
Cerebellum-inspired neural network solution of the inverse kinematics problem.
Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan
2015-12-01
The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution comparable with cerebellar anatomy and function to solve this problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We applied the proposed model to a simple two-segment arm to prove the feasibility of the model under a basic condition. A fuzzy neural network, through its learning method, was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.
From embodied mind to embodied robotics: humanities and system theoretical aspects.
Mainzer, Klaus
2009-01-01
After an introduction (1) the article analyzes the evolution of the embodied mind (2), the innovation of embodied robotics (3), and finally discusses conclusions of embodied robotics for human responsibility (4). Considering the evolution of the embodied mind (2), we start with an introduction of complex systems and nonlinear dynamics (2.1), apply this approach to neural self-organization (2.2), distinguish degrees of complexity of the brain (2.3), explain the emergence of cognitive states by complex systems dynamics (2.4), and discuss criteria for modeling the brain as complex nonlinear system (2.5). The innovation of embodied robotics (3) is a challenge of future technology. We start with the distinction of symbolic and embodied AI (3.1) and explain embodied robots as dynamical systems (3.2). Self-organization needs self-control of technical systems (3.3). Cellular neural networks (CNN) are an example of self-organizing technical systems offering new avenues for neurobionics (3.4). In general, technical neural networks support different kinds of learning robots (3.5). Finally, embodied robotics aim at the development of cognitive and conscious robots (3.6).
Addressing the Movement of a Freescale Robotic Car Using Neural Network
NASA Astrophysics Data System (ADS)
Horváth, Dušan; Cuninka, Peter
2016-12-01
This article deals with the control of a small Freescale robotic car along a predefined guide line. The direction of the robot's movement is controlled by neural networks, and the weights (memory) of the neurons are calculated by Hebbian learning from truth tables, as learning with a teacher. Reflective infrared sensors serve as inputs. Experiments are presented that compare two methods of line-tracking mobile robot control.
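Hebbian learning from a truth table, as described above, reduces to summing input-output products. A minimal sketch with a hypothetical three-sensor layout and bipolar encoding (the article's actual truth tables and sensor count are not reproduced here):

```python
def hebbian_weights(patterns):
    """Hebb's rule from a truth table: w_i = sum_k x_i^k * y^k."""
    n = len(patterns[0][0])
    w = [0.0] * n
    for x, y in patterns:
        for i in range(n):
            w[i] += x[i] * y
    return w

def neuron(w, x):
    """Threshold unit: fires (+1) if the weighted input sum is positive."""
    s = sum(w_i * x_i for w_i, x_i in zip(w, x))
    return 1 if s > 0 else -1

# Truth table: (left, centre, right) IR readings -> steering command.
# +1 = line detected under that sensor; y = +1 steer right, -1 steer left.
table = [
    ((+1, -1, -1), -1),   # line drifting left  -> steer left
    ((-1, -1, +1), +1),   # line drifting right -> steer right
]
w = hebbian_weights(table)
```

Because the weights are computed in one pass from the table, no iterative training is needed, which suits a small microcontroller.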
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Adaptive artificial neural network for autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.
Motor-Skill Learning in an Insect Inspired Neuro-Computational Control System
Arena, Eleonora; Arena, Paolo; Strauss, Roland; Patané, Luca
2017-01-01
In nature, insects show impressive adaptation and learning capabilities. The proposed computational model takes inspiration from specific structures of the insect brain: after proposing key hypotheses on the direct involvement of the mushroom bodies (MBs) and on their neural organization, we developed a new architecture for motor learning to be applied in insect-like walking robots. The proposed model is a nonlinear control system based on spiking neurons. MBs are modeled as a nonlinear recurrent spiking neural network (SNN) with novel characteristics, able to memorize time evolutions of key parameters of the neural motor controller, so that existing motor primitives can be improved. The adopted control scheme enables the structure to efficiently cope with goal-oriented behavioral motor tasks. Here, a six-legged structure, showing a steady-state exponentially stable locomotion pattern, is faced with the need to learn new motor skills: moving through the environment, the structure is able to modulate motor commands and implements an obstacle-climbing procedure. Experimental results on a simulated hexapod robot are reported; they are obtained in a dynamic simulation environment and the robot mimics the structures of Drosophila melanogaster. PMID:28337138
Sharma, Richa; Kumar, Vikas; Gaur, Prerna; Mittal, A P
2016-05-01
Being a complex, non-linear and coupled system, the robotic manipulator cannot be effectively controlled using a classical proportional-integral-derivative (PID) controller. To enhance the effectiveness of the conventional PID controller for nonlinear and uncertain systems, the gains of the PID controller should be conservatively tuned and should adapt to process parameter variations. In this work, a mix locally recurrent neural network (MLRNN) architecture is investigated to mimic a conventional PID controller; it consists of at most three hidden nodes, which act as proportional, integral and derivative nodes. The gains of the MLRNN-based PID (MLRNNPID) controller scheme are initialized with a newly developed cuckoo search algorithm (CSA) based optimization method rather than being assigned randomly. A sequential-learning-based least-squares algorithm is then investigated for the on-line adaptation of the gains of the MLRNNPID controller. The performance of the proposed controller scheme is tested against plant parameter uncertainties and external disturbances for both links of a two-link robotic manipulator with variable payload (TL-RMWVP). The stability of the proposed controller is analyzed using Lyapunov stability criteria. A performance comparison is carried out among the MLRNNPID controller, a CSA-optimized NNPID (OPTNNPID) controller and a CSA-optimized conventional PID (OPTPID) controller in order to establish the effectiveness of the MLRNNPID controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
An assembly system based on industrial robot with binocular stereo vision
NASA Astrophysics Data System (ADS)
Tang, Hong; Xiao, Nanfeng
2017-01-01
This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly obtain the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.
Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela
2015-12-01
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neural network modeling of nonlinear systems based on Volterra series extension of a linear model
NASA Technical Reports Server (NTRS)
Soloway, Donald I.; Bialasiewicz, Jan T.
1992-01-01
A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.
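The truncated second-order Volterra expansion underlying such an identification procedure can be evaluated directly. A minimal numerical sketch with illustrative kernels (the kernel values are not taken from the paper):

```python
def volterra2(x, h1, h2):
    """Output of a truncated second-order Volterra series at the last
    sample of x:
        y = sum_i h1[i]*x[n-i] + sum_{i,j} h2[i][j]*x[n-i]*x[n-j]
    where n indexes the most recent sample."""
    n = len(x) - 1
    y = sum(h1[i] * x[n - i] for i in range(len(h1)))
    y += sum(h2[i][j] * x[n - i] * x[n - j]
             for i in range(len(h2)) for j in range(len(h2)))
    return y

# Illustrative memory-2 kernels: one unit linear tap plus one quadratic term.
y = volterra2(x=[1.0, 2.0], h1=[1.0, 0.0], h2=[[0.5, 0.0], [0.0, 0.0]])
```

Identification then amounts to estimating the kernels h1 and h2 (or a neural-network representation of them) from input-output data.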
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
Mulas, Marcello; Waniek, Nicolai; Conradt, Jörg
2016-01-01
After the discovery of grid cells, which are an essential component to understand how the mammalian brain encodes spatial information, three main classes of computational models were proposed in order to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time due to the noise intrinsic in velocity estimation and neural computation prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model using Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach, we used as input to the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next-generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments. PMID:26924979
Dynamics and control of robot for capturing objects in space
NASA Astrophysics Data System (ADS)
Huang, Panfeng
Space robots are expected to perform intricate tasks in future space services, such as satellite maintenance, refueling, and replacing the orbital replacement unit (ORU). To realize these missions, the capturing operation may not be avoided. Such operations will encounter some challenges because space robots have some unique characteristics not found in ground-based robots, such as dynamic singularities, dynamic coupling between the manipulator and the space base, a limited energy supply, and working without a fixed base. In addition, contacts and impacts may not be avoidable during the capturing operation. Therefore, the dynamics and control problems of space robots for capturing objects are significant research topics if the robots are to be deployed for space services. A typical servicing operation mainly includes three phases: capturing the object, berthing and docking the object, then repairing the target. Therefore, this thesis will focus on resolving some challenging problems during the capture of the object, berthing and docking, and so on. In this thesis, I study and analyze the dynamics and control problems of space robots for capturing objects. This work has potential impact in space robotic applications. I first study the contact and impact dynamics of the space robot and objects. I specifically focus on analyzing the impact dynamics and mapping the relationship of influence and speed. Then, I develop the fundamental theory for planning the minimum-collision trajectory of the space robot and designing the configuration of the space robot at the moment of capture. To compensate for the attitude of the space base during the capturing approach operation, a new balance control concept which can effectively balance the attitude of the space base using the dynamic couplings is developed. The developed balance control concept helps in understanding the nature of space dynamic coupling, and can be readily applied to compensate for or minimize the disturbance to the space base.
After capturing the object, the space robot must complete the following two tasks: one is to berth the object, and the other is to re-orientate the attitude of the whole robot system for communication and power supply. Therefore, I propose a method to accomplish these two tasks simultaneously using manipulator motion only. The ultimate goal of space services is to realize capture and manipulation autonomously. Therefore, I propose an effective approach based on learning human skill to track and capture objects automatically in space. With human-teaching demonstration, the space robot is able to learn and abstract human tracking and capturing skill using an efficient neural-network learning architecture that combines flexible Cascade Neural Networks with Node Decoupled Extended Kalman Filtering (CNN-NDEKF). The simulation results show that this approach is useful and feasible for the tracking-trajectory planning and capturing operations of the space robot. Finally, I propose a novel approach based on Genetic Algorithms (GAs) to optimize the approach trajectory of space robots in order to realize effective and stable operations. I complete the minimum-torque path planning in order to save the limited energy in space, and design the minimum-jerk trajectory for the stabilization of the space manipulator and its space base. These optimization algorithms are very important and useful for the application of space robots.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability the natural cyclic flow of information between the brain and external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and to changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of shared goal between user and agent. Since the agent and brain share the same goal, a symbiotic relationship between them will evolve; therefore this decoding paradigm is called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor, as a neural decoder, was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as changes in the pattern of neural activity.
The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
Pohlmeyer, Eric A.; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline W.; Sanchez, Justin C.
2014-01-01
Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder’s neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. PMID:24498055
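The actor's use of binary evaluative feedback can be caricatured as a reward-modulated Hebbian update. This sketch is a deliberate simplification of the RLBMI's actor-critic architecture (which uses neural-network function approximators); all names and values here are hypothetical:

```python
def actor_update(W, state, action, feedback, alpha=0.2):
    """Strengthen (feedback=+1) or weaken (feedback=-1) the association
    between the current neural state vector and the action just taken."""
    W[action] = [w_i + alpha * feedback * s_i
                 for w_i, s_i in zip(W[action], state)]
    return W

def pick_action(W, state):
    """Greedy readout: the action whose weight vector best matches the state."""
    scores = [sum(w_i * s_i for w_i, s_i in zip(w_a, state)) for w_a in W]
    return scores.index(max(scores))

# Two robot actions, a 3-unit neural state. "Good" feedback after action 1
# makes that action more likely to be chosen again for a similar state.
W = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
actor_update(W, state=[1.0, 0.0, 1.0], action=1, feedback=+1)
```

Because only the sign of the evaluation is needed, such an update can start from random weights and keep adapting when the neural input space changes, which is the property the abstract emphasizes.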
Hebbian Plasticity in CPG Controllers Facilitates Self-Synchronization for Human-Robot Handshaking.
Jouaiti, Melanie; Caron, Lancelot; Hénaff, Patrick
2018-01-01
It is well-known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes a plausible model of plastic neural controllers that allows the emergence of synchronized movements in physical and rhythmical interactions. The controller is designed with central pattern generators (CPG) based on rhythmic Rowat-Selverston neurons endowed with neuronal and synaptic Hebbian plasticity. To demonstrate the interest of the proposed model, the case of handshaking is considered because it is a very common act, both physically and socially, but also a very complex one from the point of view of robotics, neuroscience and psychology. Plastic CPG controllers are implemented in the joints of a simulated robotic arm that has to learn the frequency and amplitude of an external force applied to its effector, thus reproducing the act of handshaking with a human. Results show that the neuronal and synaptic Hebbian plasticity work together, leading to a natural and autonomous synchronization between the arm and the external force even if the frequency changes during the movement. Moreover, a power consumption analysis shows that, by allowing the emergence of synchronized and coordinated movements, the plasticity mechanisms lead to a significant decrease in the energy spent by the robot actuators, thus generating a more adaptive and natural human/robot handshake.
Numerical Nonlinear Robust Control with Applications to Humanoid Robots
2015-07-01
automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind … drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related … universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can …
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
Intelligent manipulation technique for multi-branch robotic systems
NASA Technical Reports Server (NTRS)
Chen, Alexander Y. K.; Chen, Eugene Y. S.
1990-01-01
New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.
Automation and Robotics for Space-Based Systems, 1991
NASA Technical Reports Server (NTRS)
Williams, Robert L., II (Editor)
1992-01-01
The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.
NASA Astrophysics Data System (ADS)
Billard, Aude
2000-10-01
This paper summarizes a number of experiments in biologically inspired robotics. The common feature to all experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture, which has the general property of learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learn the topography of an environment whose features change frequently; 4) an abstract, computational model of primate ability to learn by imitation; 5) a model for the control of locomotor gaits in a quadruped legged robot.
A computational model of conditioning inspired by Drosophila olfactory system.
Faghihi, Faramarz; Moustafa, Ahmed A; Heinrich, Ralf; Wörgötter, Florentin
2017-03-01
Recent studies have demonstrated that Drosophila melanogaster (briefly, Drosophila) can successfully perform higher cognitive processes including second-order olfactory conditioning. Understanding the neural mechanism of this behavior can help neuroscientists to unravel the principles of information processing in complex neural systems (e.g. the human brain) and to create efficient and robust robotic systems. In this work, we have developed a biologically-inspired spiking neural network which is able to execute both first- and second-order conditioning. Experimental studies demonstrated that volume signaling (e.g. by the gaseous transmitter nitric oxide) contributes to memory formation in vertebrates and invertebrates including insects. Based on the existing knowledge of odor encoding in Drosophila, the role of retrograde signaling in memory function, and the integration of synaptic and non-synaptic neural signaling, a neural system is implemented as a simulated fly. The simulated fly navigates in a two-dimensional environment in which it receives odors and electric shocks as sensory stimuli. The model suggests some experimental research on retrograde signaling to investigate neural mechanisms of conditioning in insects and other animals. Moreover, it illustrates a simple strategy to implement higher cognitive capabilities in machines including robots. Copyright © 2016 Elsevier Ltd. All rights reserved.
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments. PMID:28179882
Applications of artificial intelligence in safe human-robot interactions.
Najmaei, Nima; Kermani, Mehrdad R
2011-04-01
The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots
Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers. PMID:26999614
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems, and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study the use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control, applied to robot navigation.
Door detection in images based on learning by components
NASA Astrophysics Data System (ADS)
Cicirelli, Grazia; D'Orazio, Tiziana; Ancona, Nicola
2001-10-01
In this paper we present a vision-based technique for detecting targets in the environment that have to be reached by an autonomous mobile robot during its navigational task. The targets the robot has to reach are the doors of our office building. Color and shape information are used as identifying features for detecting the principal components of the door. In images, the door can appear at different scales depending on the robot's attitude with respect to it; detection is therefore performed by detecting its most significant components in the image. Positive and negative examples, in the form of image patterns, are manually selected from real images to train two neural classifiers to recognize the individual components. Each classifier is realized by a feed-forward neural network with one hidden layer and sigmoid activation function. Moreover, a bootstrap technique is used during training to select negative examples relevant to the problem at hand. Finally, the detection system is applied to several real test images to evaluate its performance.
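The classifier architecture the abstract describes (a one-hidden-layer feed-forward network with sigmoid activations and a binary component/background output) can be sketched in a few lines of NumPy. Everything below is illustrative: the class name, layer sizes, learning rate, and the synthetic patterns are assumptions, not details from the paper, and the bootstrap step is indicated only as a comment.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ComponentClassifier:
    """One-hidden-layer feed-forward network with sigmoid activations,
    trained by gradient descent on a binary (component / background) target."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train_step(self, X, y, lr=0.5):
        out = self.forward(X)                 # (n, 1) predictions
        err = out - y.reshape(-1, 1)          # squared-loss error term
        grad_out = err * out * (1 - out)      # back through output sigmoid
        grad_h = (grad_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * self.h.T @ grad_out / len(X)
        self.b2 -= lr * grad_out.mean(0)
        self.W1 -= lr * X.T @ grad_h / len(X)
        self.b1 -= lr * grad_h.mean(0)
        return float((err ** 2).mean())

# Bootstrap idea: after an initial fit, false positives collected on held-out
# images would be appended to the negative set and training continued.
rng = np.random.default_rng(1)
pos = rng.normal(1.0, 0.3, (40, 8))   # stand-in "door component" patterns
neg = rng.normal(-1.0, 0.3, (40, 8))  # stand-in background patterns
X = np.vstack([pos, neg])
y = np.array([1.0] * 40 + [0.0] * 40)

net = ComponentClassifier(n_in=8, n_hidden=6)
losses = [net.train_step(X, y) for _ in range(500)]
print(losses[0] > losses[-1])
```

The real system extracts its patterns from color and shape features rather than random vectors, but the training loop is the same shape.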
An industrial robot singular trajectories planning based on graphs and neural networks
NASA Astrophysics Data System (ADS)
Łęgowski, Adrian; Niezabitowski, Michał
2016-06-01
Singular trajectories are rarely used because of issues during realization. A method of planning trajectories for a given set of points in task space, using graphs and neural networks, is presented. At every desired point, the inverse kinematics problem is solved to derive all possible solutions, and a graph of solutions is built. The shortest path through this graph determines the required nodes in joint space, and neural networks are used to define the path between these nodes.
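The graph step described above, choosing among multiple inverse-kinematics solutions per task-space point so that total joint-space travel is minimised, reduces to a shortest path through a layered graph, which dynamic programming solves directly. This is a minimal sketch under assumed names and toy two-branch IK data, not the paper's implementation (which additionally interpolates between nodes with neural networks).

```python
import math

def plan_through_solutions(layers):
    """layers[i] is the list of joint-space solutions (tuples) for task point i.
    Returns the solution sequence minimising total joint-space travel, found by
    dynamic programming over the layered solution graph."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # cost[j] = cheapest way to reach solution j of the current layer
    cost = [0.0] * len(layers[0])
    back = [[None] * len(layer) for layer in layers]
    for i in range(1, len(layers)):
        new_cost = []
        for j, sol in enumerate(layers[i]):
            best = min(range(len(layers[i - 1])),
                       key=lambda k: cost[k] + dist(layers[i - 1][k], sol))
            back[i][j] = best
            new_cost.append(cost[best] + dist(layers[i - 1][best], sol))
        cost = new_cost
    # backtrack from the cheapest final solution
    j = min(range(len(cost)), key=cost.__getitem__)
    path = [j]
    for i in range(len(layers) - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return [layers[i][j] for i, j in enumerate(path)], min(cost)

# Two hypothetical IK branches (e.g. "elbow up" / "elbow down") per task point:
layers = [
    [(0, 1), (3, -1)],
    [(0, 2), (3, -3)],
    [(1, 2), (4, -3)],
]
seq, total = plan_through_solutions(layers)
print(seq, total)  # stays on the first branch: [(0, 1), (0, 2), (1, 2)] 2.0
```

Because every task point forms one layer, Dijkstra over the full graph and this layer-by-layer sweep give the same answer, but the sweep is simpler.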
Sang, Hongqiang; Yang, Chenghao; Liu, Fen; Yun, Jintian; Jin, Guoguang
2016-12-01
It is very important for robotically assisted minimally invasive surgery to achieve high-precision, smooth motion control. However, the surgical instrument tip will exhibit vibration caused by nonlinear friction and unmodeled dynamics, especially when the surgical robot system attempts low-speed, fine motion. A fuzzy neural network sliding mode controller (FNNSMC) is proposed to suppress vibration of the surgical robotic system. Nonlinear friction and modeling uncertainties are compensated by a Stribeck model, a radial basis function (RBF) neural network, and a fuzzy system, respectively. Simulations and experiments were performed on a 3-degree-of-freedom (DOF) minimally invasive surgical robot. The results demonstrate that the proposed FNNSMC is effective, provides robust performance, and suppresses vibrations at the surgical instrument tip, which can enhance the quality and safety of surgical procedures.
Reach and grasp by people with tetraplegia using a neurally controlled robotic arm
Hochberg, Leigh R.; Bacher, Daniel; Jarosiewicz, Beata; Masse, Nicolas Y.; Simeral, John D.; Vogel, Joern; Haddadin, Sami; Liu, Jie; Cash, Sydney S.; van der Smagt, Patrick; Donoghue, John P.
2012-01-01
Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS) [1-5] could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with longstanding tetraplegia can use an NIS to move and click a computer cursor and to control physical devices [6-8]. Able-bodied monkeys have used an NIS to control a robotic arm [9], but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. While robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals. PMID:22596161
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
Robot navigation and accurate positioning in indoor environments remain challenging tasks, especially in applications that assist disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot to reach the goal location safely, imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, using a low-cost camera to track the target picture and a laser range finder for safe navigation. Results show that the neural controller trained with the Conjugate Gradient Backpropagation algorithm gives a robust response, guiding the mobile robot accurately to the goal position.
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. 
The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
Neural-Network Control Of Prosthetic And Robotic Hands
NASA Technical Reports Server (NTRS)
Buckley, Theresa M.
1991-01-01
Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.
Interacting With Robots to Investigate the Bases of Social Interaction.
Sciutti, Alessandra; Sandini, Giulio
2017-12-01
Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions, and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to his actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction.
We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.
Optimization of the computational load of a hypercube supercomputer onboard a mobile robot
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Toomarian, N.; Protopopescu, V.
1987-01-01
A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
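The fast-simulated-annealing side of the methodology can be illustrated with a toy load-balancing energy: a single-task reassignment plays the role of the single-neuron perturbation, and a Metropolis rule accepts or rejects it. All names, weights, instance data, and the cooling schedule below are assumptions for illustration, not the paper's robot-dynamics benchmark.

```python
import math, random

def anneal_assignment(n_tasks, n_procs, load, comm, steps=20000, seed=0):
    """Assign tasks to processors by simulated annealing. Energy combines
    processor load imbalance (idling) with a cost for communicating task
    pairs placed on different processors."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_procs) for _ in range(n_tasks)]

    def energy(a):
        per = [0.0] * n_procs
        for t, p in enumerate(a):
            per[p] += load[t]
        mean = sum(per) / n_procs
        imbalance = sum((x - mean) ** 2 for x in per)
        cut = sum(w for (i, j), w in comm.items() if a[i] != a[j])
        return imbalance + cut

    e = energy(assign)
    best, best_e = list(assign), e
    temp = 10.0
    for _ in range(steps):
        t = rng.randrange(n_tasks)
        old = assign[t]
        assign[t] = rng.randrange(n_procs)      # single-task perturbation
        e_new = energy(assign)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new                           # accept the move
            if e < best_e:
                best, best_e = list(assign), e
        else:
            assign[t] = old                     # reject, restore
        temp *= 0.9995                          # geometric cooling
    return best, best_e

# Toy instance: four equal-load tasks, two processors, two communicating pairs.
load = [1.0, 1.0, 1.0, 1.0]
comm = {(0, 1): 5.0, (2, 3): 5.0}
best, best_e = anneal_assignment(4, 2, load, comm)
```

The optimum here keeps each communicating pair together on its own processor, giving zero imbalance and zero cut cost; the analytic single-perturbation energy expressions in the paper avoid recomputing `energy` from scratch at each step.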
NASA Astrophysics Data System (ADS)
Pini, Giovanni; Tuci, Elio
2008-06-01
In biology and psychology, the capability of natural organisms to learn from observation of, and interaction with, conspecifics is referred to as social learning. Roboticists have recently developed an interest in social learning, since it might represent an effective strategy to enhance the adaptivity of a team of autonomous robots. In this study, we show that a methodological approach based on artificial neural networks shaped by evolutionary computation techniques can be successfully employed to synthesise the individual and social learning mechanisms for robots required to learn a desired action (i.e. phototaxis or antiphototaxis).
Yan, Xiaodan
2010-01-01
The current study investigated the functional connectivity of the primary sensory system with resting-state fMRI and applied this knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. A dissociation was found within the primary sensory system, in that the olfactory and somatosensory cortices were strongly connected to the amygdala, whereas the visual and auditory cortices were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. This neural architecture inspired the design of a dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and a consolidation center to coordinate the two systems. Such a design can help autonomous robots detect and respond quickly to danger, so as to maintain their sustainability and independence.
Higher-order neural network software for distortion invariant object recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly
1991-01-01
The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
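The claim that feature extraction is "built in" to a third-order network rests on geometry: each triple of active pixels contributes a feature derived from the interior angles of the triangle the triple forms, and those angles are unchanged by translation, scaling, and in-plane rotation, so one view of an object suffices. A minimal sketch of such triple-based invariants follows; the function names, binning resolution, and shape coordinates are assumptions for illustration, not the paper's software.

```python
import math
from itertools import combinations

def triple_invariants(points, bins=12):
    """For each triple of active pixels, the sorted interior angles of the
    triangle they form are unchanged by translation, scaling, and in-plane
    rotation; coarse binning absorbs small numerical differences."""
    feats = set()
    for a, b, c in combinations(points, 3):
        angs = []
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            v1 = (q[0] - p[0], q[1] - p[1])
            v2 = (r[0] - p[0], r[1] - p[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            n = math.hypot(*v1) * math.hypot(*v2)
            angs.append(math.acos(max(-1.0, min(1.0, dot / n))))
        feats.add(tuple(sorted(round(a / math.pi * bins) for a in angs)))
    return feats

def transform(points, scale, theta, dx, dy):
    """Apply an arbitrary similarity transform (scale, rotate, translate)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(scale * (c * x - s * y) + dx, scale * (s * x + c * y) + dy)
            for x, y in points]

shape = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (1.0, 2.0)]
moved = transform(shape, scale=2.5, theta=0.7, dx=10.0, dy=-3.0)
print(triple_invariants(shape) == triple_invariants(moved))  # → True
```

In the network these invariant triple features index the third-order weights directly, which is why only the classification layer has to be trained.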
Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments
Rhodes, Paul A.; Anderson, Todd O.
2012-01-01
To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
Contreras-Vidal, Jose L.; Grossman, Robert G.
2013-01-01
In this communication, a translational clinical brain-machine interface (BMI) roadmap for an EEG-based BMI to a robotic exoskeleton (NeuroRex) is presented. This multi-faceted project addresses important engineering and clinical challenges: the validation of an intelligent, self-balancing, robotic lower-body and trunk exoskeleton (Rex) augmented with EEG-based BMI capabilities to interpret user intent and assist a mobility-impaired person to walk independently. The goal is to improve the quality of life and health status of wheelchair-bound persons by enabling standing and sitting, walking and backing, turning, ascending and descending stairs/curbs, and navigating sloping surfaces in a variety of conditions without the need for additional support or crutches. PMID:24110003
IR wireless cluster synapses of HYDRA very large neural networks
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas
2008-04-01
RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy) neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to 10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.
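The quoted scale figures can be sanity-checked with simple arithmetic: 10^18 synapses spread over 10^11 neurons gives about 10^7 synapses per neuron, so "almost fully connected" clusters would hold on the order of 10^7 neurons each, for roughly 10^4 clusters. A back-of-envelope check (the cluster-count interpretation is our inference, not a figure from the abstract):

```python
# Back-of-envelope check of the quoted HYDRA scale figures.
neurons = 10**11
synapses = 10**18
syn_per_neuron = synapses // neurons   # 10**7 synapses per neuron
# In an almost fully connected cluster of n neurons, each neuron carries
# roughly n synapses, so the per-neuron synapse count sets the cluster size:
cluster_size = syn_per_neuron          # ~10**7 neurons per cluster
n_clusters = neurons // cluster_size   # ~10**4 clusters
print(syn_per_neuron, n_clusters)      # → 10000000 10000
```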
A physical model of sensorimotor interactions during locomotion
NASA Astrophysics Data System (ADS)
Klein, Theresa J.; Lewis, M. Anthony
2012-08-01
In this paper, we describe the development of a bipedal robot that models the neuromuscular architecture of human walking. The body is based on principles derived from human muscular architecture, using muscles on straps to mimic agonist/antagonist muscle action as well as bifunctional muscles. Load sensors in the straps model Golgi tendon organs. The neural architecture is a central pattern generator (CPG) composed of a half-center oscillator combined with phase-modulated reflexes that is simulated using a spiking neural network. We show that the interaction between the reflex system, body dynamics and CPG results in a walking cycle that is entrained to the dynamics of the system. We also show that the CPG helped stabilize the gait against perturbations relative to a purely reflexive system, and compared the joint trajectories to human walking data. This robot represents a complete physical, or ‘neurorobotic’, model of the system, demonstrating the usefulness of this type of robotics research for investigating the neurophysiological processes underlying walking in humans and animals.
Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel
2018-01-01
Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process of this kind of sensor could be a time-consuming task because it is usually done by identification in a manual and repetitive way. The resulting obstacles detection models are usually nonlinear functions that can be different for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on the properties of the obstacle such as shape, colour, and surface texture, among others. That is why in some situations it could be useful to gather all the measurements provided by different kinds of sensor in order to build a unique model that estimates the distances to the obstacles around the robot. This paper presents a novel approach to get an obstacles detection model based on the fusion of sensors data and automatic calibration by using artificial neural networks. PMID:29495338
Face Generation Using Emotional Regions for Sensibility Robot
NASA Astrophysics Data System (ADS)
Gotoh, Minori; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori
We think that psychological interaction is necessary for smooth communication between robots and people. One way to psychologically interact with others is through facial expressions. Facial expressions are very important for communication because they show true emotions and feelings. The "Ifbot" robot communicates with people by considering its own "emotions". Ifbot has many facial expressions to communicate enjoyment. We developed a method for generating facial expressions based on human subjective judgements mapping Ifbot's facial expressions to its emotions. We first created Ifbot's emotional space to map its facial expressions. We applied a five-layer auto-associative neural network to the space. We then subjectively evaluated the emotional space and created emotional regions based on the results. We generated emotive facial expressions using the emotional regions.
Knowledge assistant for robotic environmental characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feddema, J.; Rivera, J.; Tucker, S.
1996-08-01
A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post-analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g., estimated dimensions, weight, material composition, etc.) are displayed in the world model. This report highlights the major components of this system.
Koller, Jeffrey R; Remy, C David; Ferris, Daniel P
2018-05-25
Controllers for assistive robotic devices can be divided into two main categories: controllers using neural signals and controllers using mechanically intrinsic signals. Both approaches are prevalent in research devices, but a direct comparison between the two could provide insight into their relative advantages and disadvantages. We studied subjects walking with robotic ankle exoskeletons using two different control modes: dynamic gain proportional myoelectric control based on soleus muscle activity (neural signal), and timing-based mechanically intrinsic control based on gait events (mechanically intrinsic signal). We hypothesized that subjects would have different measures of metabolic work rate between the two controllers, as we predicted that subjects would use each controller in a unique manner due to one being dependent on muscle recruitment and the other not. The two controllers had the same average actuation signal, as we used the control signals from walking with the myoelectric controller to shape the mechanically intrinsic control signal. The difference was that the myoelectric controller allowed step-to-step variation in the actuation signals controlled by the user's soleus muscle recruitment, while the timing-based controller had the same actuation signal with each step regardless of muscle recruitment. We observed no statistically significant difference in metabolic work rate between the two controllers. Subjects walked with 11% less soleus activity during mid and late stance and significantly less peak soleus recruitment when using the timing-based controller than when using the myoelectric controller. While walking with the myoelectric controller, subjects walked with significantly higher average positive and negative total ankle power compared to walking with the timing-based controller. We interpret the reduced ankle power and muscle activity with the timing-based controller relative to the myoelectric controller to result from greater slacking effects.
Subjects were able to be less engaged on a muscle level when using a controller driven by mechanically intrinsic signals than when using a controller driven by neural signals, but this had no effect on their metabolic work rate. These results suggest that the type of controller (neural vs. mechanical) is likely to affect how individuals use robotic exoskeletons for therapeutic rehabilitation or human performance augmentation.
NASA Astrophysics Data System (ADS)
Ellery, A.
Since the remarkable British Interplanetary Society starship study of the late 1970s - Daedalus - there have been significant developments in the areas of artificial intelligence and robotics. These will be critical technologies for any starship as indeed they are for the current generation of exploratory spacecraft and in-situ planetary robotic explorers. Although early visions of truly intelligent robots have yet to materialize (reasons for which will be outlined), there are nonetheless revolutionary developments which have attempted to address at least some of these earlier unperceived deficiencies. The current state of the art comprises a number of separate strands of research which provide components of robotic intelligence though no over-arching approach has been forthcoming. The first question to be considered is the level of intelligent functionality required to support a long-duration starship mission. This will, at a minimum, need to be extensive, a level imposed by the requirement for complex reconfigurability and repair. The second question concerns the tools that we have at our disposal to implement the required intelligent functions of the starship. These are based on two very different approaches - good old-fashioned artificial intelligence (GOFAI) based on logical theorem-proving and knowledge-encoding recently augmented by modal, temporal, circumscriptive and fuzzy logics to address the well-known “frame problem”; and the more recent soft computing approaches based on artificial neural networks, evolutionary algorithms and immunity models and their variants to implement learning. The former has some flight heritage through the Remote Agent architecture whilst the latter has yet to be deployed on any space mission.
However, the notion of reconfigurable hardware of recent interest in the space community warrants the use of evolutionary algorithms and neural networks implemented on field programmable gate array technology, blurring the distinction between hardware and software. The primary question in space engineering has traditionally been one of predictability and controllability which online learning compromises. A further factor to be accounted for is the notion that intelligence is derived primarily from robot-environment interaction which stresses the sensory and actuation capabilities (exemplified by the behavioural or situated robotics paradigm). One major concern is whether the major deficiency of current methods in terms of lack of scalability can be overcome using a highly distributed approach rather than the hierarchical approach suggested by the NASREM architecture. It is contended here that a mixed solution will be required where a priori programming is augmented by a posteriori learning resembling the biological distinction between fixed genetically inherited and learned neurally implemented behaviour in animals. In particular, a biomimetic approach is proffered which exploits the neural processes and architecture of the human brain through the use of forward models, which attempts to marry the conflicting requirements of learning with predictability. Some small-scale efforts in this direction will be outlined.
Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan
2018-06-01
Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even higher-level structures such as daily episodes. Moreover, it can retrieve the correct episode from partial input cues robustly. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, for performing the three tasks of arranging toys, making cereal, and disposing of garbage.
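The resonance-and-vigilance mechanism at the core of ART models can be conveyed in a few lines. The sketch below is a generic fuzzy-ART-style category search, not the paper's Deep ART; the function names, learning rate, and vigilance value are illustrative assumptions. An input joins the best-matching stored category only if its match ratio clears the vigilance threshold; otherwise a new category is created.

```python
# Minimal sketch of an ART-style category match (not the paper's Deep ART):
# an input pattern resonates with a stored category only if the match ratio
# clears the vigilance threshold; otherwise a new category is created.

def fuzzy_and(a, b):
    """Element-wise minimum, the fuzzy-ART analogue of set intersection."""
    return [min(x, y) for x, y in zip(a, b)]

def art_classify(pattern, categories, vigilance=0.8, learning_rate=1.0):
    """Return the index of the resonating category, creating one if needed."""
    best, best_match = None, -1.0
    for i, w in enumerate(categories):
        overlap = sum(fuzzy_and(pattern, w))
        match = overlap / sum(pattern)            # match ratio in [0, 1]
        if match >= vigilance and match > best_match:
            best, best_match = i, match
    if best is None:                              # no resonance: new category
        categories.append(list(pattern))
        return len(categories) - 1
    # resonance: move the winner's weights toward the input
    w = categories[best]
    categories[best] = [(1 - learning_rate) * wi + learning_rate * m
                        for wi, m in zip(w, fuzzy_and(pattern, w))]
    return best

categories = []
a = art_classify([1.0, 0.0, 1.0], categories)   # first event: new category
b = art_classify([1.0, 0.0, 0.9], categories)   # similar: resonates with it
c = art_classify([0.0, 1.0, 0.0], categories)   # dissimilar: new category
```

The vigilance parameter controls granularity: raising it splits experience into more, finer-grained categories, which is the knob a hierarchical model can vary per layer.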
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
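The evolutionary loop described above can be sketched as follows. This is a generic genetic algorithm over controller weights with a stand-in fitness function; the toy "drive both motors toward a light dead ahead" task, the one-layer network, the population size, and the mutation scale are all illustrative assumptions, not the authors' visually guided setup.

```python
# Hedged sketch of evolving 'neural' controller weights with a genetic
# algorithm (toy fitness, not the authors' vision task).
import random

def controller(weights, sensors):
    """One-layer controller: two motor outputs from the sensor vector."""
    n = len(sensors)
    return [sum(w * s for w, s in zip(weights[i * n:(i + 1) * n], sensors))
            for i in range(2)]

def fitness(weights):
    # Stand-in task: both motor outputs should approach 1.0 when all
    # sensors read 1.0 (negated squared error, higher is better).
    left, right = controller(weights, [1.0, 1.0, 1.0])
    return -((left - 1.0) ** 2 + (right - 1.0) ** 2)

def evolve(pop_size=30, genome_len=6, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 3]                 # truncation selection
        pop = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
               for _ in range(pop_size)]              # Gaussian mutation
        pop[0] = list(parents[0])                     # elitism
    return max(pop, key=fitness)

best = evolve()
```

In the actual evolutionary-robotics setting the fitness call is replaced by a (simulated or physical) trial of the robot, which is what makes opportunistic use of visual information possible: any sensor channel that helps the behaviour raises fitness.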
A neuro-inspired spike-based PID motor controller for multi-motor robots with low cost FPGAs.
Jimenez-Fernandez, Angel; Jimenez-Moreno, Gabriel; Linares-Barranco, Alejandro; Dominguez-Morales, Manuel J; Paz-Vicente, Rafael; Civit-Balcells, Anton
2012-01-01
In this paper we present a neuro-inspired spike-based closed-loop controller written in VHDL and implemented for FPGAs. This controller is focused on controlling DC motor speed, but uses only spikes for information representation, processing and DC motor driving. It could be applied to other motors with proper driver adaptation. This controller architecture represents one of the last layers in a Spiking Neural Network (SNN), implementing a bridge between robotic actuators and spike-based processing layers and sensors. The presented control system fuses actuation and sensor information as spike streams, processing these spikes in hard real time and implementing a massively parallel information processing system through specialized spike-based circuits. This spike-based closed-loop controller has been implemented on an AER platform, designed in our labs, that allows direct control of DC motors: the AER-Robot. Experimental results demonstrate the viability of spike-based controllers, and hardware synthesis shows low hardware requirements, allowing this controller to be replicated as many parallel controllers working together for real-time robot control.
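For reference, the control law that the spike-domain circuits implement is ordinary PID. A conventional discrete-time PID speed loop, the rate-coded counterpart of the controller above, looks like the sketch below; the gains and the first-order DC motor model are illustrative assumptions, not values from the paper.

```python
# Conventional discrete-time PID speed loop (rate-coded reference point for
# the spike-based controller described above; gains/plant are illustrative).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def motor_step(speed, drive, dt, tau=0.1, gain=1.0):
    """First-order toy model of DC motor speed response to drive voltage."""
    return speed + dt * (gain * drive - speed) / tau

pid = PID(kp=2.0, ki=5.0, kd=0.01, dt=0.001)
speed = 0.0
for _ in range(3000):                      # 3 s of simulated time at 1 kHz
    speed = motor_step(speed, pid.step(1.0, speed), 0.001)
```

In the spike-based version, error, integral and derivative terms become spike-rate signals and the multiplications become spike-frequency scalings, which is what allows the whole loop to run in specialized parallel hardware.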
Event detection and localization for small mobile robots using reservoir computing.
Antonelo, E A; Schrauwen, B; Stroobandt, D
2008-08-01
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system operating at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks which are based solely on a small amount of low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
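The RC recipe, a fixed random recurrent network with only a linear readout trained by regression, can be sketched as a minimal echo state network. Everything below (reservoir size, scaling, the toy one-step-memory task standing in for event detection) is an illustrative assumption rather than the paper's setup.

```python
# Minimal echo-state-network sketch of the RC recipe: a fixed random
# reservoir, with only the linear readout trained (here by ridge regression).
import math
import random

rng = random.Random(1)
N = 30                                     # reservoir size (illustrative)
W = [[rng.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
w_in = [rng.uniform(-1, 1) for _ in range(N)]

# crude stability control: scale rows so the reservoir does not blow up
scale = 0.9 / max(sum(abs(v) for v in row) for row in W)
W = [[v * scale for v in row] for row in W]

def step(x, u):
    """One reservoir update: x' = tanh(W x + w_in u)."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

# drive the reservoir; target = the input one step earlier (memory task)
inputs = [rng.choice([-1.0, 1.0]) for _ in range(300)]
x, states, targets = [0.0] * N, [], []
for t, u in enumerate(inputs):
    x = step(x, u)
    if t > 0:
        states.append(list(x))
        targets.append(inputs[t - 1])

# ridge-regression readout via normal equations (Gaussian elimination is
# safe here: the Gram matrix plus ridge term is symmetric positive definite)
lam = 1e-6
A = [[sum(s[i] * s[j] for s in states) + (lam if i == j else 0.0)
      for j in range(N)] for i in range(N)]
b = [sum(s[i] * y for s, y in zip(states, targets)) for i in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        f = A[j][i] / A[i][i]
        A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
        b[j] -= f * b[i]
w_out = [0.0] * N
for i in reversed(range(N)):
    w_out[i] = (b[i] - sum(A[i][j] * w_out[j]
                           for j in range(i + 1, N))) / A[i][i]

preds = [sum(w * s for w, s in zip(w_out, st)) for st in states]
```

Only `w_out` is learned; the reservoir itself is never trained, which is what makes RC training as cheap as a single linear regression.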
Self discovery enables robot social cognition: are you my teacher?
Kaipa, Krishnanand N; Bongard, Josh C; Meltzoff, Andrew N
2010-01-01
Infants exploit the perception that others are 'like me' to bootstrap social cognition (Meltzoff, 2007a). This paper demonstrates how the above theory can be instantiated in a social robot that uses itself as a model to recognize structural similarities with other robots; this thereby enables the student to distinguish between appropriate and inappropriate teachers. This is accomplished by the student robot first performing self-discovery, a phase in which it uses actuation-perception relationships to infer its own structure. Second, the student models a candidate teacher using a vision-based active learning approach to create an approximate physical simulation of the teacher. Third, the student determines that the teacher is structurally similar (but not necessarily visually similar) to itself if it can find a neural controller that allows its self model (created in the first phase) to reproduce the perceived motion of the teacher model (created in the second phase). Fourth, the student uses the neural controller (created in the third phase) to move, resulting in imitation of the teacher. Results with a physical student robot and two physical robot teachers demonstrate the effectiveness of this approach. The generalizability of the proposed model allows it to be used over variations in the demonstrator: The student robot would still be able to imitate teachers of different sizes and at different distances from itself, as well as different positions in its field of view, because changes in the interrelations of the teacher's body parts are used for imitation, rather than absolute geometric properties.
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.
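The INVQR scheme itself is beyond a short sketch, but the weighted recursive least-squares update it accelerates is compact. The following is textbook RLS for a linear-in-weights model; the two-parameter identification task, initial covariance, and forgetting factor are illustrative assumptions.

```python
# Textbook weighted recursive least-squares (the update that INVQR computes
# more efficiently); toy system identification, not the paper's robot.
import random

def rls_update(w, P, x, y, lam=1.0):
    """One RLS step: w = weights, P = inverse correlation matrix,
    x = regressor vector, y = target, lam = forgetting factor."""
    n = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(xi * pxi for xi, pxi in zip(x, Px))
    k = [pxi / denom for pxi in Px]                  # gain vector
    err = y - sum(wi * xi for wi, xi in zip(w, x))   # a-priori error
    w = [wi + ki * err for wi, ki in zip(w, k)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]
    return w, P

# identify the true weights w* = [2, -1] from streaming noiseless data
true_w = [2.0, -1.0]
w = [0.0, 0.0]
P = [[1e3 if i == j else 0.0 for j in range(2)] for i in range(2)]
rng = random.Random(0)
for _ in range(200):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    y = sum(t * xi for t, xi in zip(true_w, x))
    w, P = rls_update(w, P, x, y)
```

The numerical fragility of this plain form lies in the `P` update, which can lose positive definiteness in finite precision; square-root formulations such as QR and inverse-QR decompositions exist precisely to avoid that.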
Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization, which effectively switches the neural controllers from one phase of the action to the next.
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
Proceedings of the 1987 IEEE international conference on systems, man, and cybernetics. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
This book contains the proceedings of the IEEE international conference on systems, man, and cybernetics. Topics include the following: robotics; knowledge base simulation; software systems; image and pattern recognition; neural networks; and image processing.
Self-organized adaptation of a simple neural circuit enables complex robot behaviour
NASA Astrophysics Data System (ADS)
Steingrube, Silke; Timme, Marc; Wörgötter, Florentin; Manoonpong, Poramate
2010-03-01
Controlling sensori-motor systems in higher animals or complex robots is a challenging combinatorial problem, because many sensory signals need to be simultaneously coordinated into a broad behavioural spectrum. To rapidly interact with the environment, this control needs to be fast and adaptive. Present robotic solutions operate with limited autonomy and are mostly restricted to few behavioural patterns. Here we introduce chaos control as a new strategy to generate complex behaviour of an autonomous robot. In the presented system, 18 sensors drive 18 motors by means of a simple neural control circuit, thereby generating 11 basic behavioural patterns (for example, orienting, taxis, self-protection and various gaits) and their combinations. The control signal quickly and reversibly adapts to new situations and also enables learning and synaptic long-term storage of behaviourally useful motor responses. Thus, such neural control provides a powerful yet simple way to self-organize versatile behaviours in autonomous agents with many degrees of freedom.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) when mapping the monkey's neural states to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
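The actor-critic idea can be illustrated on a stripped-down two-target decision task. This toy code is not the paper's BMI decoder: the two discrete "states" stand in for decoded neural-state classes, and the learning rates, trial count, and softmax actor are all assumptions. The critic estimates state value, and its TD error both updates that estimate and reinforces the actor's action preferences.

```python
# Toy actor-critic on a two-target decision task (illustrative stand-in for
# the BMI setting: state = decoded cue, action = robot movement choice).
import math
import random

rng = random.Random(0)
n_states, n_actions = 2, 2
V = [0.0] * n_states                                  # critic: state values
prefs = [[0.0] * n_actions for _ in range(n_states)]  # actor: preferences

def softmax_sample(p):
    """Sample an action index with probability proportional to exp(pref)."""
    z = [math.exp(v) for v in p]
    r, acc = rng.random() * sum(z), 0.0
    for a, v in enumerate(z):
        acc += v
        if r <= acc:
            return a
    return len(p) - 1

alpha_critic, alpha_actor = 0.1, 0.1
for _ in range(2000):
    state = rng.randrange(n_states)        # cue for one of two targets
    action = softmax_sample(prefs[state])
    reward = 1.0 if action == state else 0.0   # feedback: correct target?
    td_error = reward - V[state]           # single-step task: no bootstrap
    V[state] += alpha_critic * td_error    # critic update
    prefs[state][action] += alpha_actor * td_error   # actor update

policy = [max(range(n_actions), key=lambda a: prefs[s][a])
          for s in range(n_states)]
```

Note that only the scalar reward drives learning, which is the property the abstract highlights: no supervised training set of (neural state, intended action) pairs is ever needed.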
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Methodology Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
Development and Training of a Neural Controller for Hind Leg Walking in a Dog Robot
Hunt, Alexander; Szczecinski, Nicholas; Quinn, Roger
2017-01-01
Animals dynamically adapt to varying terrain and small perturbations with remarkable ease. These adaptations arise from complex interactions between the environment and biomechanical and neural components of the animal's body and nervous system. Research into mammalian locomotion has resulted in several neural and neuro-mechanical models, some of which have been tested in simulation, but few “synthetic nervous systems” have been implemented in physical hardware models of animal systems. One reason is that the implementation into a physical system is not straightforward. For example, it is difficult to make robotic actuators and sensors that model those in the animal. Therefore, even if the sensorimotor circuits were known in great detail, those parameters would not be applicable and new parameter values must be found for the network in the robotic model of the animal. This manuscript demonstrates an automatic method for setting parameter values in a synthetic nervous system composed of non-spiking leaky integrator neuron models. This method works by first using a model of the system to determine required motor neuron activations to produce stable walking. Parameters in the neural system are then tuned systematically such that it produces similar activations to the desired pattern determined using expected sensory feedback. We demonstrate that the developed method successfully produces adaptive locomotion in the rear legs of a dog-like robot actuated by artificial muscles. Furthermore, the results support the validity of current models of mammalian locomotion. This research will serve as a basis for testing more complex locomotion controllers and for testing specific sensory pathways and biomechanical designs. Additionally, the developed method can be used to automatically adapt the neural controller for different mechanical designs such that it could be used to control different robotic systems. PMID:28420977
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulties. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single word and grammatical level. The language system is embedded into a robot in order to demonstrate the correct semantical understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
NASA Astrophysics Data System (ADS)
Lowe, Robert; Ziemke, Tom
2010-09-01
The somatic marker hypothesis (SMH) posits that the role of emotions and mental states in decision-making manifests through bodily responses to stimuli of import to the organism's welfare. The Iowa Gambling Task (IGT), proposed by Bechara and Damasio in the mid-1990s, has provided the major source of empirical validation to the role of somatic markers in the service of flexible and cost-effective decision-making in humans. In recent years the IGT has been the subject of much criticism concerning: (1) whether measures of somatic markers reveal that they are important for decision-making as opposed to behaviour preparation; (2) the underlying neural substrate posited as critical to decision-making of the type relevant to the task; and (3) aspects of the methodological approach used, particularly on the canonical version of the task. In this paper, a cognitive robotics methodology is proposed to explore a dynamical systems approach as it applies to the neural computation of reward-based learning and issues concerning embodiment. This approach is particularly relevant in light of a strongly emerging alternative hypothesis to the SMH, the reversal learning hypothesis, which links, behaviourally and neurocomputationally, a number of more or less complex reward-based decision-making tasks, including the 'A-not-B' task - already subject to dynamical systems investigations with a focus on neural activation dynamics. It is also suggested that the cognitive robotics methodology may be used to extend systematically the IGT benchmark to more naturalised, but nevertheless controlled, settings that might better explore the extent to which the SMH, and somatic states per se, impact on complex decision-making.
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2006-12-01
A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are induced, and the uniformly ultimately boundedness of all signals in a closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference algorithm, KTD(λ), for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
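As an illustrative sketch of the kernel-TD idea described in this abstract (assumptions: a Gaussian kernel, which is strictly positive definite as the convergence claim requires; plain TD(0) without the eligibility traces of KTD(λ); and naive dictionary growth, one new center per observed state):

```python
import math

class KernelTD:
    """Minimal sketch of kernel temporal-difference value estimation.

    The value function is a weighted sum of Gaussian kernels centered on
    visited states; each TD error appends a new center.  The learning rate,
    kernel width, and growth strategy here are illustrative assumptions,
    not the paper's exact KTD(lambda) formulation.
    """

    def __init__(self, gamma=0.9, eta=0.5, width=1.0):
        self.gamma, self.eta, self.width = gamma, eta, width
        self.centers, self.weights = [], []

    def kernel(self, x, c):
        # Gaussian (strictly positive definite) kernel.
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        return math.exp(-d2 / (2.0 * self.width ** 2))

    def value(self, x):
        return sum(w * self.kernel(x, c)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next):
        # TD(0) error; KTD(lambda) would additionally keep eligibility traces.
        delta = reward + self.gamma * self.value(x_next) - self.value(x)
        self.centers.append(x)
        self.weights.append(self.eta * delta)
        return delta
```

Repeated updates on a self-looping rewarding state drive the estimated value toward the discounted-return fixed point 1/(1-γ).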
Kinesthetic Feedback During 2DOF Wrist Movements via a Novel MR-Compatible Robot.
Erwin, Andrew; O'Malley, Marcia K; Ress, David; Sergi, Fabrizio
2017-09-01
We demonstrate the interaction control capabilities of the MR-SoftWrist, a novel MR-compatible robot capable of applying accurate kinesthetic feedback to wrist pointing movements executed during fMRI. The MR-SoftWrist, based on a novel design that combines parallel piezoelectric actuation with compliant force feedback, is capable of delivering 1.5 N·m of torque to the wrist of an interacting subject about the flexion/extension and radial/ulnar deviation axes. The robot workspace, defined by admissible wrist rotation angles, fully includes a circle with a 20 deg radius. Via dynamic characterization, we demonstrate capability for transparent operation with low (10% of maximum torque output) backdrivability torques at nominal speeds. Moreover, we demonstrate a 5.5 Hz stiffness control bandwidth for a 14 dB range of virtual stiffness values, corresponding to 25%-125% of the device's physical reflected stiffness in the nominal configuration. We finally validate the possibility of operation during fMRI via a case study involving one healthy subject. Our validation experiment demonstrates the capability of the device to apply kinesthetic feedback to elicit distinguishable kinetic and neural responses without significant degradation of image quality or task-induced head movements. With this study, we demonstrate the feasibility of MR-compatible devices like the MR-SoftWrist to be used in support of motor control experiments investigating wrist pointing under robot-applied force fields. Such future studies may elucidate fundamental neural mechanisms enabling robot-assisted motor skill learning, which is crucial for robot-aided neurorehabilitation.
Online adaptive neural control of a robotic lower limb prosthesis
NASA Astrophysics Data System (ADS)
Spanias, J. A.; Simon, A. M.; Finucane, S. B.; Perreault, E. J.; Hargrove, L. J.
2018-02-01
Objective. The purpose of this study was to develop and evaluate an adaptive intent recognition algorithm that continuously learns to incorporate a lower limb amputee’s neural information (acquired via electromyography (EMG)) as they ambulate with a robotic leg prosthesis. Approach. We present a powered lower limb prosthesis that was configured to acquire the user’s neural information and kinetic/kinematic information from embedded mechanical sensors, and identify and respond to the user’s intent. We conducted an experiment with eight transfemoral amputees over multiple days. EMG and mechanical sensor data were collected while subjects using a powered knee/ankle prosthesis completed various ambulation activities such as walking on level ground, stairs, and ramps. Our adaptive intent recognition algorithm automatically transitioned the prosthesis into the different locomotion modes and continuously updated the user’s model of neural data during ambulation. Main results. Our proposed algorithm accurately and consistently identified the user’s intent over multiple days, despite changing neural signals. The algorithm incorporated 96.31% [0.91%] (mean, [standard error]) of neural information across multiple experimental sessions, and outperformed non-adaptive versions of our algorithm—with a 6.66% [3.16%] relative decrease in error rate. Significance. This study demonstrates that our adaptive intent recognition algorithm enables incorporation of neural information over long periods of use, allowing assistive robotic devices to accurately respond to the user’s intent with low error rates.
NASA Astrophysics Data System (ADS)
Nagai, Yukie; Asada, Minoru; Hosoda, Koh
This paper presents a developmental learning model for joint attention between a robot and a human caregiver. The basic idea of the proposed model comes from the insight of cognitive developmental science that development can help task learning. The model consists of a learning mechanism based on evaluation and two kinds of developmental mechanisms: the robot's development and the caregiver's. The former means that the sensing and actuating capabilities of the robot change from immaturity to maturity. The latter is defined as a process in which the caregiver changes the task from an easy situation to a difficult one. These two developments are triggered by the learning progress. The experimental results show that the proposed model can accelerate the learning of joint attention owing to the caregiver's development. Furthermore, it is observed that the robot's development can improve the final task performance by reducing the internal representation in the learned neural network. The mechanisms that bring these effects to the learning are analyzed in line with cognitive developmental science.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
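The RBSDM recall step described above can be sketched as a normalized sum of Gaussians over stored input/output patterns (the normalization and parameter names are illustrative assumptions, not the paper's exact memory model):

```python
import math

def rbsdm_recall(query, patterns, sigma=1.0):
    """Recall from a radial-basis sparse distributed memory (sketch).

    Each stored (input, output) pattern contributes a multivariate Gaussian
    centered on its input; the recalled output is the Gaussian-weighted
    average of the stored outputs.
    """
    weights, acc = [], None
    for inp, out in patterns:
        d2 = sum((q - i) ** 2 for q, i in zip(query, inp))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        weights.append(w)
        contrib = [w * o for o in out]
        acc = contrib if acc is None else [a + c for a, c in zip(acc, contrib)]
    return [a / sum(weights) for a in acc]
```

Queries between stored patterns interpolate smoothly, which is what lets such a memory interpolate joint coordinates between trained arm configurations.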
Neural-Learning-Based Telerobot Control With Guaranteed Performance.
Yang, Chenguang; Wang, Xinyu; Cheng, Long; Ma, Hongbin
2017-10-01
In this paper, a neural network (NN) enhanced telerobot control system is designed and tested on a Baxter robot. Guaranteed performance of the telerobot control system is achieved at both the kinematic and dynamic levels. At the kinematic level, automatic collision avoidance is achieved by a control design that exploits joint-space redundancy, so the human operator can concentrate on the motion of the robot's end-effector without concern about possible collisions. A posture restoration scheme is also integrated, based on a simulated parallel system, to enable the manipulator to restore its natural posture in the absence of obstacles. At the dynamic level, adaptive control using radial basis function NNs is developed to compensate for the effects of internal and external uncertainties, e.g., an unknown payload. Both the steady-state and the transient performance are guaranteed to satisfy a prescribed performance requirement. Comparative experiments have been performed to test the effectiveness and to demonstrate the guaranteed performance of the proposed methods.
Adaptive neural network motion control of manipulators with experimental evaluations.
Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.
2014-01-01
A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two-degrees-of-freedom robot and a vertical one-degree-of-freedom arm affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910
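The PD-plus-adaptive-compensation structure can be sketched on a simulated 1-DOF arm under gravity. This is a sketch only: the plant, the gains, and the use of a single-layer RBF network (in place of the paper's two-layer network) are all assumptions made for illustration.

```python
import math

def simulate(adaptive=True, T=10.0, dt=1e-3):
    """PD control plus adaptive RBF compensation on a 1-DOF arm (sketch).

    Control law:      tau = Kp*e + Kd*de + W^T phi(q)
    Adaptation law:   dW/dt = Gamma * phi(q) * e
    Plant (assumed):  m*l^2 * ddq = tau - m*g*l*cos(q)
    Returns the final regulation error |qd - q|.
    """
    m, l, g = 1.0, 0.5, 9.81            # illustrative plant parameters
    kp, kd, gamma = 40.0, 8.0, 5.0      # illustrative gains
    centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
    W = [0.0] * len(centers)
    q, dq, qd = 0.0, 0.0, 0.7           # regulate to qd
    for _ in range(int(T / dt)):
        e, de = qd - q, -dq
        phi = [math.exp(-(q - c) ** 2) for c in centers]
        comp = sum(w * p for w, p in zip(W, phi)) if adaptive else 0.0
        tau = kp * e + kd * de + comp
        if adaptive:
            W = [w + dt * gamma * p * e for w, p in zip(W, phi)]
        ddq = (tau - m * g * l * math.cos(q)) / (m * l ** 2)
        dq += dt * ddq
        q += dt * dq
    return abs(qd - q)
```

Without compensation the PD loop leaves a steady gravity-induced offset; with the adaptive term the network weights slowly absorb the gravity torque and the error shrinks, mirroring the "without and with compensation" comparison in the abstract.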
Multirobot Lunar Excavation and ISRU Using Artificial-Neural-Tissue Controllers
NASA Astrophysics Data System (ADS)
Thangavelautham, Jekanthan; Smith, Alexander; Abu El Samid, Nader; Ho, Alexander; Boucher, Dale; Richard, Jim; D'Eleuterio, Gabriele M. T.
2008-01-01
Automation of site preparation and resource utilization on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. Such multirobot autonomous systems would require limited human support infrastructure, complement necessary manned operations and reduce overall mission risk. We present an Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot excavation tasks. An ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to 'breed' controllers for the task at hand in simulation, and the fittest controllers are transferred onto hardware for further validation and testing. ANT facilitates 'machine creativity', with the emergence of novel functionality through a process of self-organized task decomposition of mission goals. ANT-based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment) and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot excavation task in which teams of robots with no explicit supervision can successfully avoid obstacles, interpret excavation blueprints, perform layered digging, avoid burying or trapping other robots and clear/maintain digging routes.
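The "single global fitness function plus Darwinian selection" loop that ANT relies on can be sketched generically. This is a minimal elitist genetic algorithm over a flat parameter vector, not the ANT architecture itself; population size, mutation scale, and genome encoding are all illustrative assumptions.

```python
import random

def evolve(fitness, dim=8, pop_size=20, generations=30,
           sigma=0.3, elite=5, seed=0):
    """Minimal evolutionary 'breeding' loop (sketch).

    A single global fitness function scores each controller genome; the
    fittest survive unchanged (elitism), and the rest of the population
    is refilled with mutated copies of the survivors.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by global fitness
        parents = pop[:elite]                 # fittest survive
        pop = parents + [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            for _ in range(pop_size - elite)  # mutated offspring
        ]
    return max(pop, key=fitness)
```

In the paper's setting, `fitness` would run a simulated excavation episode and score blueprint completion; here any scoring function over the genome works.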
Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning
NASA Astrophysics Data System (ADS)
Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu
Robot path-planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associative path-fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up the exploration. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path-fragments). The resultant number of path-fragments is also smaller than in other methods. Evaluation is done via Webots physical 3D-simulated and real robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is fewer than most Reinforcement Learning (RL) based methods require. The running time is proved finite and scales well with the environment. The resultant number of path-fragments matches the environment well.
Information-theoretic decomposition of embodied and situated systems.
Da Rold, Federico
2018-07-01
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics.
Soft computing-based terrain visual sensing and data fusion for unmanned ground robotic systems
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir
2006-05-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
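The Kalman-filter aggregation of sub-terrain assessments mentioned above can be sketched as a scalar Kalman update fusing noisy traversability scores. The 0..1 traversability scale, the prior, and the per-classifier variances are illustrative assumptions, not the paper's actual values.

```python
def kalman_fuse(measurements, variances, prior=0.5, prior_var=1.0):
    """Fuse sub-terrain traversability scores with scalar Kalman updates.

    Each measurement z with noise variance r refines the running estimate;
    uncertainty p shrinks with every fused assessment.
    """
    x, p = prior, prior_var
    for z, r in zip(measurements, variances):
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # fused traversability estimate
        p = (1.0 - k) * p        # reduced uncertainty
    return x, p
```

Consistent sub-region scores pull the fused estimate toward their common value while steadily reducing its variance, which is the point of aggregative fusion across sub-terrain regions.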
Reinforcement learning for a biped robot based on a CPG-actor-critic method.
Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin
2007-08-01
Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes.
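A standard CPG building block of the kind this abstract refers to is the Matsuoka half-center oscillator: two mutually inhibiting neurons with adaptation that autonomously produce an antiphase rhythm. This sketches only the CPG component, not the actor-critic training; the parameter values are illustrative, not tuned for any particular robot.

```python
def matsuoka_cpg(steps=20000, dt=1e-3, tau=0.05, tau_a=0.3,
                 beta=2.5, w=2.5, drive=1.0):
    """Two-neuron Matsuoka oscillator (sketch), Euler-integrated.

    u: membrane states, v: adaptation states, y = max(0, u): firing rates.
    Mutual inhibition (w) plus adaptation (beta, tau_a) yields a stable
    limit-cycle oscillation under a tonic drive input.
    """
    u = [0.1, 0.0]   # slight asymmetry breaks the symmetric equilibrium
    v = [0.0, 0.0]
    out = []
    for _ in range(steps):
        y = [max(0.0, ui) for ui in u]
        for i in range(2):
            j = 1 - i
            du = (-u[i] - beta * v[i] - w * y[j] + drive) / tau
            dv = (-v[i] + y[i]) / tau_a
            u[i] += dt * du
            v[i] += dt * dv
        out.append(y[0] - y[1])  # antiphase rhythmic output
    return out
```

In a CPG-actor-critic setting, a trainable policy would shape such oscillator outputs (and their feedback coupling) into joint torques for the biped.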
Speeding up the learning of robot kinematics through function decomposition.
Ruiz de Angulo, Vicente; Torras, Carme
2005-11-01
The main drawback of using neural networks or other example-based learning procedures to approximate the inverse kinematics (IK) of robot arms is the high number of training samples (i.e., robot movements) required to attain an acceptable precision. We propose here a trick, valid for most industrial robots, that greatly reduces the number of movements needed to learn or relearn the IK to a given accuracy. This trick consists in expressing the IK as a composition of learnable functions, each having half the dimensionality of the original mapping. Off-line and on-line training schemes to learn these component functions are also proposed. Experimental results obtained by using nearest neighbors and parameterized self-organizing map, with and without the decomposition, show that the time savings granted by the proposed scheme grow polynomially with the precision required.
Adaptive Control Strategies for Interlimb Coordination in Legged Robots: A Review
Aoi, Shinya; Manoonpong, Poramate; Ambe, Yuichi; Matsuno, Fumitoshi; Wörgötter, Florentin
2017-01-01
Walking animals produce adaptive interlimb coordination during locomotion in accordance with their situation. Interlimb coordination is generated through the dynamic interactions of the neural system, the musculoskeletal system, and the environment, although the underlying mechanisms remain unclear. Recently, investigations of the adaptation mechanisms of living beings have attracted attention, and bio-inspired control systems based on neurophysiological findings regarding sensorimotor interactions are being developed for legged robots. In this review, we introduce adaptive interlimb coordination for legged robots induced by various factors (locomotion speed, environmental situation, body properties, and task). In addition, we show characteristic properties of adaptive interlimb coordination, such as gait hysteresis and different time-scale adaptations. We also discuss the underlying mechanisms and control strategies to achieve adaptive interlimb coordination and the design principle for the control system of legged robots. PMID:28878645
Cyr, André; Boukadoum, Mounir
2013-03-01
This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neuroscience research, the originality of this rule includes a modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data by the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextually irrelevant information.
Social cognitive neuroscience and humanoid robotics.
Chaminade, Thierry; Cheng, Gordon
2009-01-01
We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We will first review important aspects of this framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives
Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan
2014-01-01
The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of the ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798
Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro
2014-12-01
Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of organizing episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration).
Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.
Fast Dynamical Coupling Enhances Frequency Adaptation of Oscillators for Robotic Locomotion Control
Nachstedt, Timo; Tetzlaff, Christian; Manoonpong, Poramate
2017-01-01
Rhythmic neural signals serve as basis of many brain processes, in particular of locomotion control and generation of rhythmic movements. It has been found that specific neural circuits, named central pattern generators (CPGs), are able to autonomously produce such rhythmic activities. In order to tune, shape and coordinate the produced rhythmic activity, CPGs require sensory feedback, i.e., external signals. Nonlinear oscillators are a standard model of CPGs and are used in various robotic applications. A special class of nonlinear oscillators are adaptive frequency oscillators (AFOs). AFOs are able to adapt their frequency toward the frequency of an external periodic signal and to keep this learned frequency once the external signal vanishes. AFOs have been successfully used, for instance, for resonant tuning of robotic locomotion control. However, the choice of parameters for a standard AFO is characterized by a trade-off between the speed of the adaptation and its precision and, additionally, is strongly dependent on the range of frequencies the AFO is confronted with. As a result, AFOs are typically tuned such that they require a comparably long time for their adaptation. To overcome the problem, here, we improve the standard AFO by introducing a novel adaptation mechanism based on dynamical coupling strengths. The dynamical adaptation mechanism enhances both the speed and precision of the frequency adaptation. In contrast to standard AFOs, in this system, the interplay of dynamics on short and long time scales enables fast as well as precise adaptation of the oscillator for a wide range of frequencies. Amongst others, a very natural implementation of this mechanism is in terms of neural networks. The proposed system enables robotic applications which require fast retuning of locomotion control in order to react to environmental changes or conditions. PMID:28377710
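For reference, the standard (single-coupling) Hopf AFO that this paper improves upon can be sketched as follows; the learned frequency ω drifts toward the frequency of the external periodic input. Parameter values and the Euler integration are illustrative assumptions, and this is the baseline AFO, not the paper's dynamically coupled variant.

```python
import math

def adapt_frequency(f_ext=3.0, omega0=12.0, eps=1.5, mu=1.0,
                    T=200.0, dt=1e-3):
    """Hopf adaptive frequency oscillator (standard AFO, sketch).

    State (x, y) follows a Hopf limit cycle perturbed by the external
    signal F(t); the frequency parameter omega adapts via
    d(omega)/dt = -eps * F(t) * y / r and converges to the input frequency.
    """
    x, y, omega = 1.0, 0.0, omega0
    target = 2.0 * math.pi * f_ext
    for k in range(int(T / dt)):
        F = math.cos(target * k * dt)     # external periodic teaching signal
        r2 = x * x + y * y
        r = max(math.sqrt(r2), 1e-9)
        dx = (mu - r2) * x - omega * y + eps * F
        dy = (mu - r2) * y + omega * x
        domega = -eps * F * y / r         # frequency adaptation rule
        x += dt * dx
        y += dt * dy
        omega += dt * domega
    return omega
```

The trade-off the abstract describes is visible here: a larger `eps` speeds adaptation but perturbs the limit cycle and the learned frequency more, which is the tension the proposed dynamical coupling mechanism addresses.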
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
Performance-based robotic assistance during rhythmic arm exercises.
Leconte, Patricia; Ronsse, Renaud
2016-09-13
Rhythmic and discrete upper-limb movements are two fundamental motor primitives controlled by different neural pathways, at least partially. After stroke, both primitives can be impaired. Both conventional and robot-assisted therapies mainly train discrete functional movements like reaching and grasping. However, if the movements form two distinct neural and functional primitives, both should be trained to recover the complete motor repertoire. Recent studies show that rhythmic movements tend to be less impaired than discrete ones, so combining both movement types in therapy could support the execution of movements with a higher degree of impairment by movements that are performed more stably. A new performance-based assistance method was developed to train rhythmic movements with a rehabilitation robot. The algorithm uses the assist-as-needed paradigm by independently assessing and assisting movement features of smoothness, velocity, and amplitude. The method relies on different building blocks: (i) an adaptive oscillator captures the main movement harmonic in state variables, (ii) custom metrics measure the movement performance regarding the three features, and (iii) adaptive forces assist the patient. The patient is encouraged to improve performance regarding these three features with assistance forces computed in parallel to each other. The method was tested with simulated jerky signals and a pilot experiment with two stroke patients, who were instructed to make circular movements with an end-effector robot with assistance during half of the trials. Simulation data reveal sensitivity of the metrics for assessing the features while limiting interference between them. The assistance's effectiveness with stroke patients is established since it (i) adapts to the patient's real-time performance, (ii) improves patient motor performance, and (iii) does not lead the patient to slack. 
The smoothness assistance was by far the most used by both patients, while it provided no active mechanical work to the patient on average. Our performance-based assistance method for training rhythmic movements is a viable candidate to complement robot-assisted upper-limb therapies for training a larger motor repertoire.
Visual terrain mapping for traversable path planning of mobile robots
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Amrani, Rachida; Tunstel, Edward W.
2004-10-01
In this paper, we have primarily discussed technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We have described different methods for detection of salient terrain features based on imaging texture analysis techniques. We have also presented three competing techniques for terrain traversability assessment of mobile robots navigating in unstructured natural terrain environments. These three techniques include: a rule-based terrain classifier, a neural network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain visual clues. The Kalman Filtering technique is applied for aggregative fusion of sub-terrain assessment results. The last two terrain classifiers are shown to have remarkable capability for terrain traversability assessment of natural terrains. We have conducted a comparative performance evaluation of all three terrain classifiers and presented the results in this paper.
Evolutionary Design of a Robotic Material Defect Detection System
NASA Technical Reports Server (NTRS)
Ballard, Gary; Howsman, Tom; Craft, Mike; ONeil, Daniel; Steincamp, Jim; Howell, Joe T. (Technical Monitor)
2002-01-01
During the post-flight inspection of SSME engines, several inaccessible regions must be disassembled to inspect for defects such as cracks, scratches, gouges, etc. An improvement to the inspection process would be the design and development of very small robots capable of penetrating these inaccessible regions and detecting the defects. The goal of this research was to utilize an evolutionary design approach for the robotic detection of these types of defects. A simulation and visualization tool was developed prior to receiving the hardware as a development test bed. A small, commercial off-the-shelf (COTS) robot was selected from several candidates as the proof of concept robot. The basic approach to detect the defects was to utilize Cadmium Sulfide (CdS) sensors to detect changes in contrast of an illuminated surface. A neural network, optimally designed utilizing a genetic algorithm, was employed to detect the presence of the defects (cracks). By utilization of the COTS robot and US sensors, the research successfully demonstrated that an evolutionarily designed neural network can detect the presence of surface defects.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Adaptation to a cortex controlled robot attached at the pelvis and engaged during locomotion in rats
Song, Weiguo; Giszter, Simon F.
2011-01-01
Brain Machine Interfaces (BMIs) should ideally show robust adaptation of the BMI across different tasks and daily activities. Most BMIs have used over-practiced tasks. Little is known about BMIs in dynamic environments. How are mechanically body-coupled BMIs integrated into ongoing rhythmic dynamics, e.g., in locomotion? To examine this we designed a novel BMI using neural discharge in the hindlimb/trunk motor cortex in rats during locomotion to control a robot attached at the pelvis. We tested neural adaptation when rats experienced (a) control locomotion, (b) ‘simple elastic load’ (a robot load on locomotion without any BMI neural control) and (c) ‘BMI with elastic load’ (in which the robot loaded locomotion and a BMI neural control could counter this load). Rats significantly offset applied loads with the BMI while preserving more normal pelvic height compared to load alone. Adaptation occurred over about 100–200 step cycles in a trial. Firing rates increased in both the loaded conditions compared to baseline. Mean phases of cells’ discharge in the step cycle shifted significantly between BMI and the simple load condition. Over time more BMI cells became positively correlated with the external force and modulated more deeply, and neurons’ network correlations on a 100ms timescale increased. Loading alone showed none of these effects. The BMI neural changes of rate and force correlations persisted or increased over repeated trials. Our results show that rats have the capacity to use motor adaptation and motor learning to fairly rapidly engage hindlimb/trunk coupled BMIs in their locomotion. PMID:21414932
On the Role of Sensory Feedbacks in Rowat–Selverston CPG to Improve Robot Legged Locomotion
Amrollah, Elmira; Henaff, Patrick
2010-01-01
This paper presents the use of Rowat and Selverston-type of central pattern generator (CPG) to control locomotion. It focuses on the role of afferent exteroceptive and proprioceptive signals in the dynamic phase synchronization in CPG legged robots. The sensori-motor neural network architecture is evaluated to control a two-joint planar robot leg that slips on a rail. Then, the closed loop between the CPG and the mechanical system allows to study the modulation of rhythmic patterns and the effect of the sensing loop via sensory neurons during the locomotion task. Firstly simulations show that the proposed architecture easily allows to modulate rhythmic patterns of the leg, and therefore the velocity of the robot. Secondly, simulations show that sensori-feedbacks from foot/ground contact of the leg make the hip velocity smoother and larger. The results show that the Rowat–Selverston-type CPG with sensory feedbacks is an effective choice for building adaptive neural CPGs for legged robots. PMID:21228904
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
A neuro-collision avoidance strategy for robot manipulators
NASA Technical Reports Server (NTRS)
Onema, Joel P.; Maclaunchlan, Robert A.
1992-01-01
The area of collision avoidance and path planning in robotics has received much attention in the research community. Our study centers on a combination of an artificial neural network paradigm with a motion planning strategy that insures safe motion of the Articulated Two-Link Arm with Scissor Hand System relative to an object. Whenever an obstacle is encountered, the arm attempts to slide along the obstacle surface, thereby avoiding collision by means of the local tangent strategy and its artificial neural network implementation. This combination compensates the inverse kinematics of a robot manipulator. Simulation results indicate that a neuro-collision avoidance strategy can be achieved by means of a learning local tangent method.
Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J
2008-05-01
Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.
Chang, Yeong-Chan
2005-12-01
This paper addresses the problem of designing adaptive fuzzy-based (or neural network-based) robust controls for a large class of uncertain nonlinear time-varying systems. This class of systems can be perturbed by plant uncertainties, unmodeled perturbations, and external disturbances. Nonlinear H(infinity) control technique incorporated with adaptive control technique and VSC technique is employed to construct the intelligent robust stabilization controller such that an H(infinity) control is achieved. The problem of the robust tracking control design for uncertain robotic systems is employed to demonstrate the effectiveness of the developed robust stabilization control scheme. Therefore, an intelligent robust tracking controller for uncertain robotic systems in the presence of high-degree uncertainties can easily be implemented. Its solution requires only to solve a linear algebraic matrix inequality and a satisfactorily transient and asymptotical tracking performance is guaranteed. A simulation example is made to confirm the performance of the developed control algorithms.
Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.
Resquín, Francisco; Gonzalez-Vargas, Jose; Ibáñez, Jaime; Brunetti, Fernando; Pons, José Luis
2016-01-01
Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES) is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL) control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model. PMID:27990245
Integrating sensorimotor systems in a robot model of cricket behavior
NASA Astrophysics Data System (ADS)
Webb, Barbara H.; Harrison, Reid R.
2000-10-01
The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author has build a robot that localizes sound based on close modeling of the auditory and neural system in the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation for the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.
Scano, A; Chiavenna, A; Caimmi, M; Malosio, M; Tosatti, L M; Molteni, F
2017-07-01
Robot-assisted training is a widely used technique to promote motor re-learning on post-stroke patients that suffer from motor impairment. While it is commonly accepted that robot-based therapies are potentially helpful, strong insights about their efficacy are still lacking. The motor re-learning process may act on muscular synergies, which are groups of co-activating muscles that, being controlled as a synergic group, allow simplifying the problem of motor control. In fact, by coordinating a reduced amount of neural signals, complex motor patterns can be elicited. This paper aims at analyzing the effects of robot assistance during 3D-reaching movements in the framework of muscular synergies. 5 healthy people and 3 neurological patients performed free and robot-assisted reaching movements at 2 different speeds (slow and quasi-physiological). EMG recordings were used to extract muscular synergies. Results indicate that the interaction with the robot very slightly alters healthy people patterns but, on the contrary, it may promote the emergency of physiological-like synergies on neurological patients.
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots
Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
2018-01-01
In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept. PMID:29593521
Learning to recognize objects on the fly: a neurally based dynamic field approach.
Faubel, Christian; Schöner, Gregor
2008-05-01
Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematic problem. The networks are trained with endeffector position and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary endeffector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.
Gao, Hui; Song, Yongduan; Wen, Changyun
In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.
NASA Technical Reports Server (NTRS)
Thakoor, Anil
1990-01-01
Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.
Liu, Zhi; Chen, Ci; Zhang, Yun; Chen, C L P
2015-03-01
To achieve an excellent dual-arm coordination of the humanoid robot, it is essential to deal with the nonlinearities existing in the system dynamics. The literatures so far on the humanoid robot control have a common assumption that the problem of output hysteresis could be ignored. However, in the practical applications, the output hysteresis is widely spread; and its existing limits the motion/force performances of the robotic system. In this paper, an adaptive neural control scheme, which takes the unknown output hysteresis and computational efficiency into account, is presented and investigated. In the controller design, the prior knowledge of system dynamics is assumed to be unknown. The motion error is guaranteed to converge to a small neighborhood of the origin by Lyapunov's stability theory. Simultaneously, the internal force is kept bounded and its error can be made arbitrarily small.
A Cognitive Neuroscience Perspective on Embodied Language for Human-Robot Cooperation
ERIC Educational Resources Information Center
Madden, Carol; Hoen, Michel; Dominey, Peter Ford
2010-01-01
This article addresses issues in embodied sentence processing from a "cognitive neural systems" approach that combines analysis of the behavior in question, analysis of the known neurophysiological bases of this behavior, and the synthesis of a neuro-computational model of embodied sentence processing that can be applied to and tested in the…
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
Autonomous Robotic Inspection in Tunnels
NASA Astrophysics Data System (ADS)
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
2016-06-01
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within the civil infrastructures, grab stereo images and process/analyse them, in order to identify defect types. At first, there is the crack detection via deep learning approaches. Then, a detailed 3D model of the cracked area is created, utilizing photogrammetric methods. Finally, a laser profiling of the tunnel's lining, for a narrow region close to detected crack is performed; allowing for the deduction of potential deformations. The robotic platform consists of an autonomous mobile vehicle; a crane arm, guided by the computer vision-based crack detector, carrying ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Then, real-time 3D information is accurately calculated and the crack position and orientation is passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e. in Egnatia Highway and London underground infrastructure.
A Hybrid Robotic Control System Using Neuroblastoma Cultures
NASA Astrophysics Data System (ADS)
Ferrández, J. M.; Lorente, V.; Cuadra, J. M.; Delapaz, F.; Álvarez-Sánchez, José Ramón; Fernández, E.
The main objective of this work is to analyze the computing capabilities of human neuroblastoma cultured cells and to define connection schemes for controlling a robot behavior. Multielectrode Array (MEA) setups have been designed for direct culturing neural cells over silicon or glass substrates, providing the capability to stimulate and record simultaneously populations of neural cells. This paper describes the process of growing human neuroblastoma cells over MEA substrates and tries to modulate the natural physiologic responses of these cells by tetanic stimulation of the culture. We show that the large neuroblastoma networks developed in cultured MEAs are capable of learning: establishing numerous and dynamic connections, with modifiability induced by external stimuli and we propose an hybrid system for controlling a robot to avoid obstacles.
A real-time spiking cerebellum model for learning robot control.
Carrillo, Richard R; Ros, Eduardo; Boucheny, Christian; Coenen, Olivier J-M D
2008-01-01
We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, therefore abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
Validation of a robotic balance system for investigations in the control of human standing balance.
Luu, Billy L; Huryn, Thomas P; Van der Loos, H F Machiel; Croft, Elizabeth A; Blouin, Jean-Sébastien
2011-08-01
Previous studies have shown that human body sway during standing approximates the mechanics of an inverted pendulum pivoted at the ankle joints. In this study, a robotic balance system incorporating a Stewart platform base was developed to provide a new technique to investigate the neural mechanisms involved in standing balance. The robotic system, programmed with the mechanics of an inverted pendulum, controlled the motion of the body in response to a change in applied ankle torque. The ability of the robotic system to replicate the load properties of standing was validated by comparing the load stiffness generated when subjects balanced their own body to the robot's mechanical load programmed with a low (concentrated-mass model) or high (distributed-mass model) inertia. The results show that static load stiffness was not significantly (p > 0.05) different between standing and the robotic system. Dynamic load stiffness for the robotic system increased with the frequency of sway, as predicted by the mechanics of an inverted pendulum, with the higher inertia being accurately matched to the load properties of the human body. This robotic balance system accurately replicated the physical model of standing and represents a useful tool to simulate the dynamics of a standing person.
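The single-link inverted-pendulum model of quiet standing underlying this setup can be sketched in a few lines: I·θ̈ = m·g·h·sin(θ) − τ_ankle. The body mass, CoM height, and concentrated-mass inertia below are illustrative values, not parameters from the study.

```python
import math

# Illustrative inverted-pendulum model of standing balance. Parameters are
# assumed, not taken from the robotic balance system described above.
M, H, G = 70.0, 0.9, 9.81          # body mass (kg), CoM height (m), gravity
I = M * H * H                      # concentrated-mass (point-mass) inertia

def step(theta, omega, ankle_torque, dt=0.001):
    """One Euler step of I*theta_dd = M*G*H*sin(theta) - ankle_torque."""
    alpha = (M * G * H * math.sin(theta) - ankle_torque) / I
    return theta + omega * dt, omega + alpha * dt

# With ankle torque exactly balancing gravity, the sway angle holds steady.
theta, omega = 0.02, 0.0
tau_static = M * G * H * math.sin(theta)
for _ in range(1000):
    theta, omega = step(theta, omega, tau_static)
```

Any shortfall in ankle torque relative to `tau_static` makes the modeled body fall away from vertical, which is the load behavior the robot reproduces.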
Evolutionary robotics simulations help explain why reciprocity is rare in nature
André, Jean-Baptiste; Nolfi, Stefano
2016-01-01
The relative rarity of reciprocity in nature, contrary to theoretical predictions that it should be widespread, is currently one of the major puzzles in social evolution theory. Here we use evolutionary robotics to solve this puzzle. We show that models based on game theory are misleading because they neglect the mechanics of behavior. In a series of experiments with simulated robots controlled by artificial neural networks, we find that reciprocity does not evolve, and show that this results from a general constraint that likely also prevents it from evolving in the wild. Reciprocity can evolve if it requires very few mutations, as is usually assumed in evolutionary game theoretic models, but not if, more realistically, it requires the accumulation of many adaptive mutations. PMID:27616139
von Twickel, Arndt; Büschges, Ansgar; Pasemann, Frank
2011-02-01
This article presents modular recurrent neural network controllers for single legs of a biomimetic six-legged robot equipped with standard DC motors. Following arguments of Ekeberg et al. (Arthropod Struct Dev 33:287-300, 2004), completely decentralized and sensori-driven neuro-controllers were derived from neuro-biological data of stick-insects. Parameters of the controllers were either hand-tuned or optimized by an evolutionary algorithm. Employing identical controller structures, qualitatively similar behaviors were achieved for robot and for stick insect simulations. For a wide range of perturbing conditions, as for instance changing ground height or up- and downhill walking, swing as well as stance control were shown to be robust. Behavioral adaptations, like varying locomotion speeds, could be achieved by changes in neural parameters as well as by a mechanical coupling to the environment. To a large extent the simulated walking behavior matched biological data. For example, this was the case for body support force profiles and swing trajectories under varying ground heights. The results suggest that the single-leg controllers are suitable as modules for hexapod controllers, and they might therefore bridge morphological- and behavioral-based approaches to stick insect locomotion control.
Santello, Marco; Bianchi, Matteo; Gabiccini, Marco; Ricciardi, Emiliano; Salvietti, Gionata; Prattichizzo, Domenico; Ernst, Marc; Moscatelli, Alessandro; Jörntell, Henrik; Kappers, Astrid M.L.; Kyriakopoulos, Kostas; Albu-Schäffer, Alin; Castellini, Claudio; Bicchi, Antonio
2017-01-01
The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project “The Hand Embodied” (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies. PMID:26923030
Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo
2018-01-01
Recent research has shown that the ubiquitous use of cameras and voice monitoring equipment in a home environment can raise privacy concerns and affect human mental health. This can be a major obstacle to the deployment of smart home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. By focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN), to merge the areas that have similar features and determine the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes, and developed training and validation datasets, containing 2580 and 360 images, respectively. Meanwhile, we designed three types of experiments with four types of test datasets composed of 960 sample images. Thirdly, we analyzed how a different number of training iterations affects our prediction estimation, and then we explored the relationship between recognition accuracy and learning rates. Our results show that our proposed privacy detection system can recognize designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, we compared the results among RODA-FY, Inception V3, and YOLO, which indicate that our proposed RODA-FY outperforms the other comparison models in recognition accuracy. PMID:29757211
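The bounding-box merging step rests on intersection-over-union (IoU) between region proposals. The following is a generic greedy IoU merge for illustration only, not the paper's B-RPN; the box format and threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_boxes(boxes, thresh=0.5):
    """Greedily fuse overlapping proposals into their bounding hull."""
    merged = []
    for b in boxes:
        for i, m in enumerate(merged):
            if iou(b, m) >= thresh:
                merged[i] = (min(b[0], m[0]), min(b[1], m[1]),
                             max(b[2], m[2]), max(b[3], m[3]))
                break
        else:
            merged.append(b)
    return merged

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
merged = merge_boxes(boxes)   # first two overlap heavily and fuse
```

The paper's algorithm additionally conditions merging on feature similarity from the region proposal network, which this sketch omits.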
Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks
NASA Technical Reports Server (NTRS)
Mclauchlan, Robert A.
1989-01-01
Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.
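The adaptive-critic scheme of the Barto/Sutton/Anderson controller can be illustrated with its core update, a TD(0)-style "internal reinforcement" signal. The state labels, learning rate, and discount below are illustrative, not from the report.

```python
# TD(0) critic update: the internal reinforcement at the heart of the
# Barto-Sutton-Anderson neuronlike adaptive critic. State values live in a
# dict; alpha and gamma are illustrative.
def td_update(v, s, s_next, r, alpha=0.1, gamma=0.95):
    delta = r + gamma * v[s_next] - v[s]   # TD error / internal reinforcement
    v[s] += alpha * delta                  # move the value estimate toward target
    return delta

v = {"balanced": 0.0, "falling": 0.0}
delta = td_update(v, "balanced", "falling", r=1.0)
```

In the full controller this signal also trains the action element, steering the cart to keep the pendulum upright.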
NASA Astrophysics Data System (ADS)
Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo
2002-04-01
Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilize perceptions to guide their learning to those parts of the perception-action space that are actually relevant to the task. We therefore conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. To this end, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of GAs (genetic algorithms), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified by using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection and sea rescue, as well as military maritime activity.
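The genetic-algorithm component can be illustrated by evolving a small weight vector against a fitness function, a stand-in for tuning the GAFRL agent's network parameters. The population size, mutation scale, and quadratic objective are all assumptions made for the sketch.

```python
import random

random.seed(0)

def fitness(w):
    # Stand-in objective: negative squared error against a target weight
    # vector (illustrative; not the paper's walking evaluation).
    return -sum((wi - t) ** 2 for wi, t in zip(w, (0.5, -0.3, 0.8)))

def evolve(pop_size=30, generations=60, sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(g) + random.gauss(0, sigma)  # crossover + mutation
                     for g in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because elites survive unchanged, the best fitness is non-decreasing across generations, the property the paper leverages to escape the local minima of plain actor-critic learning.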
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing, such as fuzzy logic, evolutionary computation, and neural networks, has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
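A Mamdani controller of the kind described can be sketched with two rules, triangular membership functions, and centroid defuzzification. The rule base (near obstacle → slow, far → fast) and all membership shapes are assumptions for illustration, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def speed_command(distance):
    """Two Mamdani rules (near -> slow, far -> fast), centroid defuzzification.

    `distance` and the returned speed are both normalized to [0, 1]."""
    near = tri(distance, -0.1, 0.0, 1.0)   # membership of 'near obstacle'
    far = tri(distance, 0.0, 1.0, 1.1)     # membership of 'far from obstacle'
    num = den = 0.0
    for i in range(101):                   # discretized output universe
        s = i / 100.0
        mu = max(min(near, tri(s, -0.1, 0.0, 0.5)),   # clipped 'slow' set
                 min(far, tri(s, 0.5, 1.0, 1.1)))     # clipped 'fast' set
        num += mu * s
        den += mu
    return num / den if den else 0.0
```

The min/max pair implements Mamdani implication and aggregation; a real navigation controller would add rules over bearing and multiple range sensors.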
Biobotic insect swarm based sensor networks for search and rescue
NASA Astrophysics Data System (ADS)
Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong
2014-06-01
The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent over days to reach surviving victims, a task in which robotic swarms or other distributed robotic systems could play a great role in achieving faster results. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to be convergent within finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus the accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks.
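The recurrent-network approach can be illustrated with the plain linear gradient dynamics X' = -gamma * Aᵀ(AX - B) for a constant LME AX = B, integrated with Euler steps. The paper's actual contribution, nonlinear activation functions that make convergence finite-time, is omitted here, and the matrices below are toy values.

```python
# Gradient-type recurrent dynamics X' = -gamma * A^T (A X - B), integrated
# with Euler steps; X converges to the solution of A X = B. Linear baseline
# only; the paper's nonlinear activations are not modeled.
def solve_lme(A, B, gamma=5.0, dt=0.001, steps=5000):
    n = len(A)
    X = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        E = [[sum(A[i][k] * X[k][j] for k in range(n)) - B[i][j]
              for j in range(n)] for i in range(n)]          # residual A X - B
        for i in range(n):
            for j in range(n):
                grad = sum(A[k][i] * E[k][j] for k in range(n))  # (A^T E)[i][j]
                X[i][j] -= gamma * dt * grad
    return X

A = [[2.0, 0.0], [0.0, 4.0]]
B = [[2.0, 4.0], [8.0, 4.0]]
X = solve_lme(A, B)   # exact solution: [[1, 2], [2, 1]]
```

With a linear activation the residual only decays exponentially; replacing `grad` with a sign-power function of itself is what yields the finite-time bound the paper derives.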
Neural Coding for Effective Rehabilitation
2014-01-01
Successful neurological rehabilitation depends on accurate diagnosis, effective treatment, and quantitative evaluation. Neural coding, a technology for interpretation of functional and structural information of the nervous system, has contributed to the advancements in neuroimaging, brain-machine interface (BMI), and design of training devices for rehabilitation purposes. In this review, we summarized the latest breakthroughs in neuroimaging from microscale to macroscale levels with potential diagnostic applications for rehabilitation. We also reviewed the achievements in electrocorticography (ECoG) coding with both animal models and human beings for BMI design, electromyography (EMG) interpretation for interaction with external robotic systems, and robot-assisted quantitative evaluation on the progress of rehabilitation programs. Future rehabilitation would be more home-based, automatic, and self-served by patients. Further investigations and breakthroughs are mainly needed in aspects of improving the computational efficiency in neuroimaging and multichannel ECoG by selection of localized neuroinformatics, validation of the effectiveness in BMI guided rehabilitation programs, and simplification of the system operation in training devices. PMID:25258708
Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio
2015-09-01
Mainstream approaches to modelling cognitive processes have typically focused on (1) reproducing their neural underpinning, without regard to sensory-motor systems and (2) producing a single, ideal computational model. Evolutionary robotics is an alternative possibility to bridge the gap between neural substrate and behavior by means of a sensory-motor apparatus, and a powerful tool to build a population of individuals rather than a single model. We trained 4 populations of neurorobots, equipped with a pan/tilt/zoom camera, and provided with different types of motor control in order to perform a cancellation task, often used to tap spatial cognition. Neurorobots' eye movements were controlled by (a) position, (b) velocity, (c) simulated muscles and (d) simulated muscles with fixed level of zoom. Neurorobots provided with muscle and velocity control showed better performances than those controlled in position. This is an interesting result since muscle control can be considered a particular type of position control. Finally, neurorobots provided with muscle control and zoom outperformed those without zooming ability.
Adaptive and predictive control of a simulated robot arm.
Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo
2013-06-01
In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns the motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees a precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales for robot-arms of a high number of degrees of freedom (DOFs) using a simulated model of a robot arm of the new generation of light weight robots (LWRs).
Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai
2014-01-01
This paper presents an asynchronous intracortical brain-computer interface (BCI) that allows the subject to continuously drive a mobile robot. This system has great implications for helping disabled patients move around. By carefully designing a multiclass support vector machine (SVM), the subject's self-paced instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of the movement directions. Experimental results on a nonhuman primate showed that the overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the neural representation of the stop state for self-paced control was not stably represented and could drift.
Understanding the dynamical control of animal movement
NASA Astrophysics Data System (ADS)
Edwards, Donald
2008-03-01
Over the last 50 years, neurophysiologists have described many neural circuits that transform sensory input into motor commands, while biomechanicians and behavioral biologists have described many patterns of animal movement that occur in response to sensory input. Attempts to link these two have been frustrated by our technical inability to record from the necessary neurons in a freely behaving animal. As a result, we don't know how these neural circuits function in the closed loop context of free behavior, where the sensory and motor context changes on a millisecond time-scale. To address this problem, we have developed a software package, AnimatLab (www.AnimatLab.com), that enables users to reconstruct an animal's body and its relevant neural circuits, to link them at the sensory and motor ends, and through simulation, to test their ability to reproduce appropriate patterns of the animal's movements in a simulated Newtonian world. A Windows-based program, AnimatLab consists of a neural editor, a body editor, a world editor, stimulus and recording facilities, neural and physics engines, and an interactive 3-D graphical display. We have used AnimatLab to study three patterns of behavior: the grasshopper jump, crayfish escape, and crayfish leg movements used in postural control, walking, reaching and grasping. In each instance, the simulation helped identify constraints on both nervous function and biomechanical performance that have provided the basis for new experiments. Colleagues elsewhere have begun to use AnimatLab to study control of paw movements in cats and postural control in humans. We have also used AnimatLab simulations to guide the development of an autonomous hexapod robot in which the neural control circuitry is downloaded to the robot from the test computer.
Use of 3D vision for fine robot motion
NASA Technical Reports Server (NTRS)
Lokshin, Anatole; Litwin, Todd
1989-01-01
An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The ongoing use of computer vision to allow robust fine-motion manipulation in a poorly structured world is described, along with preliminary results and the problems encountered.
NASA Astrophysics Data System (ADS)
Fu, Liyue; Song, Aiguo
2018-02-01
In order to improve the measurement precision of a 6-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a GA (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple a six-axis force/torque sensor. Compared with traditional decoupling algorithms, namely calculating the pseudo-inverse of the calibration matrix and the classical BP algorithm, the decoupling results validate the good performance of the GA-BP algorithm, and the coupling errors are reduced.
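The classical pseudo-inverse baseline that the GA-BP network is compared against can be sketched directly: given applied calibration forces F and coupled readings V, the decoupling matrix is D = F Vᵀ (V Vᵀ)⁻¹. A 2-axis toy example stands in for the 6-axis sensor; the coupling values are illustrative.

```python
# Pseudo-inverse decoupling: recover D mapping coupled readings V back to
# applied forces F via D = F V^T (V V^T)^(-1). Toy 2-axis example.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# True coupling: each output channel mixes both force axes.
C_true = [[1.0, 0.2], [0.1, 1.0]]
F = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]   # applied calibration forces
V = matmul(C_true, F)                    # coupled sensor readings
# Least-squares estimate of the decoupling matrix.
Vt = transpose(V)
D = matmul(matmul(F, Vt), inv2(matmul(V, Vt)))
```

Because the toy readings are noise-free and linear, D recovers the inverse of `C_true` exactly; the paper's point is that a GA-tuned BP network handles the nonlinear residual couplings this linear baseline cannot.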
First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)
NASA Technical Reports Server (NTRS)
Griffin, Sandy (Editor)
1987-01-01
Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.
Twitching in Sensorimotor Development from Sleeping Rats to Robots
Marques, Hugo Gravato; Iida, Fumiya
2013-01-01
It is still not known how the “rudimentary” movements of fetuses and infants are transformed into the coordinated, flexible, and adaptive movements of adults. In addressing this important issue, we consider a behavior that has been perennially viewed as a functionless by-product of a dreaming brain: the jerky limb movements called myoclonic twitches. Recent work has identified the neural mechanisms that produce twitching as well as those that convey sensory feedback from twitching limbs to the spinal cord and brain. In turn, these mechanistic insights have helped inspire new ideas about the functional roles that twitching might play in the self-organization of spinal and supraspinal sensorimotor circuits. Striking support for these ideas is coming from the field of developmental robotics: When twitches are mimicked in robot models of the musculoskeletal system, basic neural circuitry self-organizes. Mutually inspired biological and synthetic approaches promise not only to produce better robots, but also to solve fundamental problems concerning the developmental origins of sensorimotor maps in the spinal cord and brain. PMID:23787051
Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.
1991-01-01
Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.
A neural learning classifier system with self-adaptive constructivism for mobile robot control.
Hurst, Jacob; Bull, Larry
2006-01-01
For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.
A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach
Owaki, Dai; Goda, Masashi; Miyazawa, Sakiko; Ishiguro, Akio
2017-01-01
Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principle of animals' agile locomotion and to realize robots with legs that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a “minimal” model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named “Tegotae,” a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of local sensory feedback mechanisms essential for the interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns. PMID:28649197
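As a concrete illustration of the Tegotae-based rule described in this abstract, the sketch below implements a single-leg phase update. The specific Tegotae function T(phi, N) = -N*sin(phi), and all parameter values, are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def tegotae_step(phi, N, omega=2 * math.pi, sigma=0.5, dt=0.01):
    """One Euler step of a Tegotae-style phase update for a single leg.

    Tegotae scores the consistency between the motor 'intention' (phase phi)
    and the sensed reaction N (ground reaction force).  Assuming the form
    T(phi, N) = -N*sin(phi), the update  phi' = omega + sigma * dT/dphi
    becomes  phi' = omega - sigma * N * cos(phi).
    """
    dT_dphi = -N * math.cos(phi)
    return phi + dt * (omega + sigma * dT_dphi)

# A loaded leg (N > 0) is slowed while cos(phi) > 0 and sped up otherwise,
# which is how purely local feedback entrains the rhythm to body mechanics.
phi = 0.0
for _ in range(100):          # 1 second of simulated time
    phi = tegotae_step(phi, N=1.0)
```

The same local rule, replicated per leg with no direct coupling, is the sense in which the model is "minimal": coordination arises through the mechanics, not through the wiring.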
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circle is employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
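The hybrid pinhole-plus-MLPNN idea above can be sketched as a two-stage fit: a parametric camera model first, then a small network trained on its residuals. Everything below (the identity "pinhole" model, the synthetic radial distortion, network size, and training schedule) is an illustrative assumption, not the authors' calibration pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" camera: ideal projection plus a mild radial distortion
# that the parametric model alone cannot capture (a stand-in for the
# calibration residual the paper's MLPNN learns).
def project_true(xy):
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + 0.1 * r2)

pts = rng.uniform(-1, 1, size=(200, 2))
obs = project_true(pts)

# Stage 1: the parametric model (here simply the identity mapping).
pinhole = pts
residual = obs - pinhole

# Stage 2: a one-hidden-layer MLP fits the residual by full-batch
# gradient descent (sizes and schedule are assumptions).
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    h = np.tanh(pts @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - residual
    gW2 = h.T @ err / len(pts); gb2 = err.mean(0)     # backprop, layer 2
    dh = (err @ W2.T) * (1 - h ** 2)                  # backprop through tanh
    gW1 = pts.T @ dh / len(pts); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

corrected = pinhole + np.tanh(pts @ W1 + b1) @ W2 + b2
err_pinhole = np.mean((obs - pinhole) ** 2)
err_hybrid = np.mean((obs - corrected) ** 2)          # should be smaller
```

The design point is that the network never replaces the physical model; it only absorbs what the model systematically misses.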
Verification hybrid control of a wheeled mobile robot and manipulator
NASA Astrophysics Data System (ADS)
Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz
2016-04-01
In this article, innovative approaches to the realization of wheeled mobile robot and manipulator tracking are presented. The concepts include the application of neural-fuzzy systems to compensate for the controlled system's nonlinearities in the tracking control task. The proposed control algorithms work on-line, contain structures that adapt to the changing work conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot-ER 4pc robotic manipulator and a Pioneer 2DX mobile robot.
Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A
2017-07-01
Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.
A design philosophy for multi-layer neural networks with applications to robot control
NASA Technical Reports Server (NTRS)
Vadiee, Nader; Jamshidi, MO
1989-01-01
A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of a pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware for spatio-temporal pattern recognition in such areas as air defense systems (e.g., target tracking and recognition). Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.
Parisi, Domenico
2010-01-01
Trying to understand human language by constructing robots that have language necessarily implies an embodied view of language, where the meaning of linguistic expressions is derived from the physical interactions of the organism with the environment. The paper describes a neural model of language according to which the robot's behaviour is controlled by a neural network composed of two sub-networks, one dedicated to the non-linguistic interactions of the robot with the environment and the other one to processing linguistic input and producing linguistic output. We present the results of a number of simulations using the model and we suggest how the model can be used to account for various language-related phenomena such as disambiguation, the metaphorical use of words, the pervasive idiomaticity of multi-word expressions, and mental life as talking to oneself. The model implies a view of the meaning of words and multi-word expressions as a temporal process that takes place in the entire brain and has no clearly defined boundaries. The model can also be extended to emotional words if we assume that an embodied view of language includes not only the interactions of the robot's brain with the external environment but also the interactions of the brain with what is inside the body.
Basic emotions and adaptation. A computational and evolutionary model.
Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio
2017-01-01
The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on the emotion of fear, so the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning, and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives, and to select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations in each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that the presence of an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions.
Neural network system for purposeful behavior based on foveal visual preprocessor
NASA Astrophysics Data System (ADS)
Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.
1996-10-01
A biologically plausible model of a system with adaptive behavior in an a priori unknown environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second, a neural network, determines the adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The system developed has been studied by computer simulation imitating the collision-free motion of a mobile robot. After a learning period, the system 'moves' along a road without collisions. It is shown that, in spite of impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested as a form of visual input to the system.
A Space Station robot walker and its shared control software
NASA Technical Reports Server (NTRS)
Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji
1994-01-01
In this paper, we first briefly overview the update of the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere on the interior or exterior of Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and on a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified by a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.
Simple robot suggests physical interlimb communication is essential for quadruped walking
Owaki, Dai; Kano, Takeshi; Nagasawa, Ko; Tero, Atsushi; Ishiguro, Akio
2013-01-01
Quadrupeds have versatile gait patterns, depending on the locomotion speed, environmental conditions and animal species. These locomotor patterns are generated via the coordination between limbs and are partly controlled by an intraspinal neural network called the central pattern generator (CPG). Although this forms the basis for current control paradigms of interlimb coordination, the mechanism responsible for interlimb coordination remains elusive. By using a minimalistic approach, we have developed a simple-structured quadruped robot, with the help of which we propose an unconventional CPG model that consists of four decoupled oscillators with only local force feedback in each leg. Our robot exhibits good adaptability to changes in weight distribution and walking speed simply by responding to local feedback, and it can mimic the walking patterns of actual quadrupeds. Our proposed CPG-based control method suggests that physical interaction between legs during movements is essential for interlimb coordination in quadruped walking. PMID:23097501
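The decoupled-oscillator CPG described above reduces, per leg, to the rule dphi_i/dt = omega - sigma * N_i * cos(phi_i), with N_i the local ground reaction force. The sketch below simulates four such oscillators; the stance condition (sin(phi) > 0) and the equal load-sharing model are assumptions for illustration, not the robot's actual force sensing.

```python
import math

def step_phases(phases, omega=2 * math.pi, sigma=1.0, mass=1.0, dt=0.01):
    """One Euler step of the decoupled-oscillator CPG with local force feedback.

    Each leg i obeys  dphi_i/dt = omega - sigma * N_i * cos(phi_i),
    where N_i is the ground reaction force on that leg.  Toy load model
    (an assumption): a leg is in stance while sin(phi) > 0, and the body
    weight is shared equally among the stance legs.
    """
    stance = [math.sin(p) > 0 for p in phases]
    n_st = max(sum(stance), 1)
    new_phases = []
    for p, st in zip(phases, stance):
        N = mass / n_st if st else 0.0
        new_phases.append(p + dt * (omega - sigma * N * math.cos(p)))
    return new_phases

# Four legs started at arbitrary phases; no leg talks to any other leg --
# coordination can only arise through the shared load N_i.
phases = [0.0, 1.5, 3.0, 4.5]
for _ in range(1000):          # 10 seconds of simulated time
    phases = step_phases(phases)
```

Note that the oscillators exchange no signals at all; the only "communication" channel is the mechanical one, which is exactly the paper's claim.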
Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian
2009-01-01
Background Several authors suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor predictions of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movements to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
NASA Astrophysics Data System (ADS)
Chen, Dechao; Zhang, Yunong
2017-10-01
Dual-arm redundant robot systems are usually required to handle primary tasks repetitively and synchronously in practical applications. In this paper, a jerk-level synchronous repetitive motion scheme is proposed to remedy the joint-angle drift phenomenon and achieve the synchronous control of a dual-arm redundant robot system. The proposed scheme is novel in being resolved at the jerk level, which makes the joint variables, i.e. joint angles, joint velocities and joint accelerations, smooth and bounded. In addition, two types of dynamics algorithms, i.e. gradient-type (G-type) and zeroing-type (Z-type) dynamics algorithms, for the design of repetitive motion variable vectors are presented in detail with the corresponding circuit schematics. Subsequently, the proposed scheme is reformulated as two dynamical quadratic programs (DQPs) and further integrated into a unified DQP (UDQP) for the synchronous control of a dual-arm robot system. The optimal solution of the UDQP is found by a piecewise-linear projection equation neural network. Moreover, simulations and comparisons based on a six-degrees-of-freedom planar dual-arm redundant robot system substantiate the operational effectiveness and tracking accuracy of the robot system with the proposed scheme for repetitive motion and synchronous control.
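A projection neural network of the kind used above to solve the UDQP can be sketched as a simple continuous-time dynamical system, Euler-integrated here on a toy box-constrained QP. The dynamics below are a standard projection network, a hedged stand-in for the paper's piecewise-linear projection equation network; the example problem and all parameters are illustrative.

```python
import numpy as np

def solve_box_qp(Q, c, lo, hi, lam=1.0, dt=0.05, steps=5000):
    """Projection network for  min 1/2 x^T Q x + c^T x,  lo <= x <= hi.

    Dynamics:  dx/dt = lam * (P(x - (Qx + c)) - x),
    where P clips its argument onto the box.  A fixed point satisfies the
    KKT conditions of the QP, so integrating to equilibrium solves it.
    """
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        grad = Q @ x + c
        x = x + dt * lam * (np.clip(x - grad, lo, hi) - x)
    return x

# Toy QP: unconstrained optimum (1, 3); the box [0, 2]^2 clamps it to (1, 2).
Q = np.diag([2.0, 2.0])
c = np.array([-2.0, -6.0])
x = solve_box_qp(Q, c, lo=0.0, hi=2.0)
```

The appeal of such networks in the robotics setting is that the same update is a fixed, parallelizable circuit, so the QP can be re-solved at every control tick.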
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms, a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation on spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as BCI modality. PMID:28948168
Bong Seok Park; Jin Bae Park; Yoon Ho Choi
2011-08-01
We present a leader-follower-based adaptive formation control method for electrically driven nonholonomic mobile robots with limited information. First, an adaptive observer is developed under the condition that velocity measurements are not available. With the proposed adaptive observer, the formation control part is designed to achieve the desired formation and guarantee collision avoidance. In addition, a neural network is employed to compensate for actuator saturation, and a projection algorithm is used to estimate the velocity information of the leader. It is shown, using Lyapunov theory, that all errors of the closed-loop system are uniformly ultimately bounded. Simulation results are presented to illustrate the performance of the proposed control system.
Pruning artificial neural networks using neural complexity measures.
Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F
2008-10-01
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
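For reference, the benchmark method named above, Magnitude Based Pruning, fits in a few lines. The paper's contribution would replace the |w| score below with its information-theoretic neural-complexity score; only the scoring function differs, so this sketch shows the shared pruning skeleton, not the paper's measure.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Magnitude Based Pruning: zero out the given fraction of weights
    with the smallest absolute value.  Returns the pruned weights and a
    boolean mask of the surviving connections."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * fraction)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest |w|
    mask = np.abs(weights) > thresh             # strictly greater: k pruned
    return weights * mask, mask

# Prune half the connections of a random 3x4 weight matrix.
w = np.random.default_rng(0).standard_normal((3, 4))
pruned, kept = magnitude_prune(w, 0.5)
```

A complexity-based variant would compute its score from unit activation statistics rather than from the weights alone, which is why it can preserve behaviour that magnitude pruning destroys.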
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims at constructing an efficient interface, similar to those widely used in daily life, to fulfill the need of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering-wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force-feedback steering control and a six-monitor wall. It provides manual operation, like driving a car, to navigate a rescue robot. The latter consists of a mouse and a camera view displayed in a monitor. It provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Results of experiments show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering-wheel interface offers high navigating speed in open areas, without restriction by the terrain and surface conditions of a disaster site. The mouse-screen interface is good at exact navigation in complex structures, while bringing little tension to operators. The two interfaces can be switched into each other at any time to provide a combined, efficient navigation method.
Yamashita, Yuichi; Tani, Jun
2008-01-01
It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. 
Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems. PMID:18989398
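The multiple-timescales mechanism described above can be sketched with two pools of continuous-time leaky-integrator units that differ only in their time constants. Pool sizes, weight scales, and the tau values below are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
nf, ns, nx = 10, 5, 3          # fast units, slow units, input dimension

# Illustrative random connectivity; names and scales are assumptions.
Wff = 0.5 * rng.standard_normal((nf, nf)); Wfs = 0.5 * rng.standard_normal((nf, ns))
Wsf = 0.5 * rng.standard_normal((ns, nf)); Wss = 0.5 * rng.standard_normal((ns, ns))
Wfx = 0.5 * rng.standard_normal((nf, nx))

def mtrnn_step(uf, us, x, tau_f=2.0, tau_s=50.0):
    """One Euler step of leaky-integrator units with two timescales.

    Fast units (small tau) can track quick sensorimotor primitives;
    slow units (large tau) drift gently and can sequence the primitives.
    No gating or explicit module boundary is imposed anywhere.
    """
    yf, ys = np.tanh(uf), np.tanh(us)
    uf_new = (1 - 1 / tau_f) * uf + (1 / tau_f) * (Wff @ yf + Wfs @ ys + Wfx @ x)
    us_new = (1 - 1 / tau_s) * us + (1 / tau_s) * (Wsf @ yf + Wss @ ys)
    return uf_new, us_new

uf, us = rng.standard_normal(nf), rng.standard_normal(ns)
uf2, us2 = mtrnn_step(uf, us, x=rng.standard_normal(nx))
d_fast = np.abs(uf2 - uf).mean()
d_slow = np.abs(us2 - us).mean()   # much smaller per-step change
```

The point of the sketch: the hierarchy is not wired in; it can only emerge from the asymmetry in time constants, which is the paper's central claim.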
Grosmaire, Anne Gaëlle; Battini, Elena
2017-01-01
A better understanding of the neural substrates that underlie motor recovery after stroke has led to the development of innovative rehabilitation strategies and tools that incorporate key elements of motor skill relearning, that is, intensive motor training involving goal-oriented repeated movements. Robotic devices for the upper limb are increasingly used in rehabilitation. Studies have demonstrated the effectiveness of these devices in reducing motor impairments, but less so for the improvement of upper limb function. Other studies have begun to investigate the benefits of combined approaches that target muscle function (functional electrical stimulation and botulinum toxin injections), modulate neural activity (noninvasive brain stimulation), and enhance motivation (virtual reality) in an attempt to potentialize the benefits of robot-mediated training. The aim of this paper is to overview the current status of such combined treatments and to analyze the rationale behind them. PMID:29057269
Soekadar, Surjo R; Witkowski, Matthias; Vitiello, Nicola; Birbaumer, Niels
2015-06-01
The loss of hand function can result in severe physical and psychosocial impairment. Thus, compensation of a lost hand function using assistive robotics that can be operated in daily life is very desirable. However, versatile, intuitive, and reliable control of assistive robotics is still an unsolved challenge. Here, we introduce a novel brain/neural-computer interaction (BNCI) system that integrates electroencephalography (EEG) and electrooculography (EOG) to improve control of assistive robotics in daily life environments. To evaluate the applicability and performance of this hybrid approach, five healthy volunteers (HV) (four men, average age 26.5 ± 3.8 years) and a 34-year-old patient with complete finger paralysis due to a brachial plexus injury (BPI) used EEG (condition 1) and EEG/EOG (condition 2) to control grasping motions of a hand exoskeleton. All participants were able to control the BNCI system (BNCI control performance HV: 70.24 ± 16.71%, BPI: 65.93 ± 24.27%), but inclusion of EOG significantly improved performance across all participants (HV: 80.65 ± 11.28, BPI: 76.03 ± 18.32%). This suggests that hybrid BNCI systems can achieve substantially better control over assistive devices, e.g., a hand exoskeleton, than systems using brain signals alone and thus may increase applicability of brain-controlled assistive devices in daily life environments.
Improving Robot Motor Learning with Negatively Valenced Reinforcement Signals
Navarro-Guerrero, Nicolás; Lowe, Robert J.; Wermter, Stefan
2017-01-01
Both nociception and punishment signals have been used in robotics. However, the potential for using these negatively valenced types of reinforcement learning signals for robot learning has not been exploited in detail yet. Nociceptive signals are primarily used as triggers of preprogrammed action sequences. Punishment signals are typically disembodied, i.e., with no or little relation to the agent-intrinsic limitations, and they are often used to impose behavioral constraints. Here, we provide an alternative approach for nociceptive signals as drivers of learning rather than simple triggers of preprogrammed behavior. Explicitly, we use nociception to expand the state space while we use punishment as a negative reinforcement learning signal. We compare the performance—in terms of task error, the amount of perceived nociception, and length of learned action sequences—of different neural networks imbued with punishment-based reinforcement signals for inverse kinematic learning. We contrast the performance of a version of the neural network that receives nociceptive inputs to that without such a process. Furthermore, we provide evidence that nociception can improve learning—making the algorithm more robust against network initializations—as well as behavioral performance by reducing the task error, perceived nociception, and length of learned action sequences. Moreover, we provide evidence that punishment, at least as typically used within reinforcement learning applications, may be detrimental in all relevant metrics. PMID:28420976
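The distinction drawn above, punishment as a negative reinforcement learning signal rather than a trigger of preprogrammed behavior, can be illustrated with a toy example. The following is a hedged sketch (tabular Q-learning on an invented 6-cell corridor, not the authors' neural-network setup), in which bumping into a wall delivers a punishment of -1 that shapes the learned policy:

```python
import random

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on an invented 6-cell corridor (states 0..5,
    actions +1/-1). Reaching cell 5 yields +1; bumping into the left
    wall delivers a punishment of -1, standing in for a nociceptive event."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(6) for a in (1, -1)}
    for _ in range(episodes):
        s = 2
        for _ in range(20):
            if rng.random() < eps:
                a = rng.choice((1, -1))                        # explore
            else:
                a = max((1, -1), key=lambda act: Q[(s, act)])  # exploit
            s2 = s + a
            if s2 < 0:              # wall hit: punished, position unchanged
                r, s2 = -1.0, 0
            elif s2 == 5:           # goal reached
                r = 1.0
            else:
                r = 0.0
            future = 0.0 if s2 == 5 else max(Q[(s2, 1)], Q[(s2, -1)])
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
            s = s2
            if s == 5:
                break
    return Q

Q = train()
```

After training, the greedy policy moves right from every non-terminal cell: the punished wall action ends up with a strictly lower value than the rewarded direction.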
Orthogonal Patterns In A Binary Neural Network
NASA Technical Reports Server (NTRS)
Baram, Yoram
1991-01-01
Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.
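The convergence property described, a probe settling onto a stored pattern, can be sketched with a small Hopfield-style binary network; this is an illustrative stand-in with invented patterns, not the report's specific formulation:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of ±1 patterns (zero self-connections).
    Mutually orthogonal patterns produce no cross-talk between memories."""
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Synchronous threshold updates until the state stops changing."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0     # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Two orthogonal ±1 patterns of length 8
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = store([p1, p2])
noisy = p1.copy()
noisy[0] = -1          # probe: p1 with one bit flipped
```

`recall(W, noisy)` restores the corrupted bit and returns the stored pattern `p1`, which is a fixed point of the update rule.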
Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.
Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan
2018-05-30
Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.
Moioli, Renan C; Vargas, Patricia A; Husbands, Phil
2012-09-01
Oscillatory activity is ubiquitous in nervous systems, with solid evidence that synchronisation mechanisms underpin cognitive processes. Nevertheless, its informational content and relationship with behaviour are still to be fully understood. In addition, cognitive systems cannot be properly appreciated without taking into account brain-body-environment interactions. In this paper, we developed a model based on the Kuramoto model of coupled phase oscillators to explore the role of neural synchronisation in the performance of a simulated robotic agent in two different minimally cognitive tasks. We show that there is a statistically significant difference in performance and evolvability depending on the synchronisation regime of the network. In both tasks, a combination of information flow and dynamical analyses show that networks with a definite, but not too strong, propensity for synchronisation are more able to reconfigure, to organise themselves functionally and to adapt to different behavioural conditions. The results highlight the asymmetry of information flow and its behavioural correspondence. Importantly, it also shows that neural synchronisation dynamics, when suitably flexible and reconfigurable, can generate minimally cognitive embodied behaviour.
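The Kuramoto model underlying this work can be sketched in a few lines; this is a generic illustration of the synchronisation regimes the abstract refers to (population size, coupling strength, and frequency spread are invented, not the paper's values):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchronisation."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)       # random initial phases
omega = rng.normal(0.0, 0.1, 50)                # intrinsic frequencies
r_before = order_parameter(theta)
for _ in range(5000):                           # 50 time units
    theta = kuramoto_step(theta, omega, K=2.0)  # coupling well above critical
r_after = order_parameter(theta)
```

With coupling well above the critical value the ensemble locks phases and the order parameter approaches 1; lowering `K` toward zero leaves the oscillators incoherent, which is the axis along which the paper compares synchronisation regimes.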
Design and control of an IPMC wormlike robot.
Arena, Paolo; Bonomo, Claudia; Fortuna, Luigi; Frasca, Mattia; Graziani, Salvatore
2006-10-01
This paper presents an innovative wormlike robot controlled by cellular neural networks (CNNs) and made of an ionic polymer-metal composite (IPMC) self-actuated skeleton. The IPMC actuators from which it is made are new materials that behave similarly to biological muscles. The idea that inspired the work is the possibility of using IPMCs to design autonomous moving structures. CNNs have already demonstrated their power as new structures for bio-inspired locomotion generation and control. The control scheme for the proposed IPMC moving structure is based on CNNs. The wormlike robot is made entirely of IPMCs, and each actuator has to carry its own weight. All the actuators are connected together without any additional parts, thereby constituting the robot structure itself. Worm locomotion is performed by bending the actuators sequentially from "tail" to "head," imitating the traveling wave observed in real-world undulatory locomotion. The activation signals are generated by a CNN. In the authors' opinion, the proposed strategy represents a promising solution in the field of autonomous and light structures that are capable of reconfiguring and moving in line with spatial-temporal dynamics generated by CNNs.
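The tail-to-head traveling wave that drives the worm gait can be sketched as phase-shifted activation signals; this toy generator stands in for the paper's CNN, and the frequency and phase-lag values are invented:

```python
import math

def actuator_signals(n_segments, t, freq=1.0, phase_lag=0.6):
    """Bending commands for each body segment at time t; segment i lags
    segment i-1 by phase_lag rad, so the wave travels along the body."""
    return [math.sin(2.0 * math.pi * freq * t - i * phase_lag)
            for i in range(n_segments)]

# After a delay of phase_lag / (2*pi*freq), the whole activation pattern
# has shifted by exactly one segment down the body.
a = actuator_signals(5, 0.0)
b = actuator_signals(5, 0.6 / (2.0 * math.pi))
```

Sampling these signals at the robot's control rate and mapping each value to a segment's bending yields the undulatory wave described in the abstract.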
Causal network in a deafferented non-human primate brain.
Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G
2015-01-01
De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble, and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI was controlling a robot for reach-and-grasp behavior. The motor cortical regions used for the BMI were deafferented due to chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the Akaike Information Criterion (AIC).
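The core Granger-causality idea behind GLM-GC, that one signal's past improves prediction of another beyond that signal's own past, can be sketched in its simplest linear form. This uses ordinary least squares on synthetic data rather than the GLM framework and AIC-based model selection of the paper:

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    """Residual-variance ratio of a restricted AR model of x (own past only)
    to a full model (own past plus past of y). Values well above 1 suggest
    that y Granger-causes x (simplified linear version)."""
    n = len(x)
    target = x[lag:]
    own = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] +
                           [y[lag - k:n - k] for k in range(1, lag + 1)])
    def rss(A, b):
        D = np.column_stack([np.ones(len(A)), A])   # add intercept column
        coef, *_ = np.linalg.lstsq(D, b, rcond=None)
        return float(((b - D @ coef) ** 2).sum())
    return rss(own, target) / rss(both, target)

# Synthetic check: x is driven by y's past, but not the other way round
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.normal()
```

Here `granger_ratio(x, y)` comes out far above 1 (y's past strongly predicts x), while `granger_ratio(y, x)` stays near 1; the GLM variant generalizes this comparison to point-process spike data.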
Pneumatic artificial muscle actuators for compliant robotic manipulators
NASA Astrophysics Data System (ADS)
Robinson, Ryan Michael
Robotic systems are increasingly being utilized in applications that require interaction with humans. In order to enable safe physical human-robot interaction, light weight and compliant manipulation are desirable. These requirements are problematic for many conventional actuation systems, which are often heavy, and typically use high stiffness to achieve high performance, leading to large impact forces upon collision. However, pneumatic artificial muscles (PAMs) are actuators that can satisfy these safety requirements while offering power-to-weight ratios comparable to those of conventional actuators. PAMs are extremely lightweight actuators that produce force in response to pressurization. These muscles demonstrate natural compliance, but have a nonlinear force-contraction profile that complicates modeling and control. This body of research presents solutions to the challenges associated with the implementation of PAMs as actuators in robotic manipulators, particularly with regard to modeling, design, and control. An existing PAM force balance model was modified to incorporate elliptic end geometry and a hyper-elastic constitutive relationship, dramatically improving predictions of PAM behavior at high contraction. Utilizing this improved model, two proof-of-concept PAM-driven manipulators were designed and constructed; design features included parallel placement of actuators and a tendon-link joint design. Genetic algorithm search heuristics were employed to determine an optimal joint geometry; allowing a manipulator to achieve a desired torque profile while minimizing the required PAM pressure. Performance of the manipulators was evaluated in both simulation and experiment employing various linear and nonlinear control strategies. 
These included output-feedback techniques, such as proportional-integral-derivative (PID) and fuzzy logic control, model-based computed-torque control, and more advanced controllers, such as sliding mode, adaptive sliding mode, and adaptive neural network control. Results demonstrated the benefits of an accurate model in model-based control, and the advantages of adaptive neural network control when a model is unavailable or variations in payload are expected. Lastly, a variable recruitment strategy was applied to a group of parallel muscles actuating a common joint. Increased manipulator efficiency was observed when fewer PAMs were activated, justifying the use of variable recruitment strategies. Overall, this research demonstrates the benefits of pneumatic artificial muscles as actuators in robotics applications. It demonstrates that PAM-based manipulators can be well-modeled and can achieve high tracking accuracy over a wide range of payloads and inputs while maintaining natural compliance.
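Of the output-feedback strategies mentioned, PID is the simplest. Below is a minimal discrete PID sketch driving an invented first-order plant, a crude stand-in for a single PAM-driven joint rather than the manipulator model from this work; all gains are illustrative:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Invented first-order plant, angle' = -angle + u, integrated with Euler steps
dt, angle = 0.01, 0.0
ctrl = PID(kp=8.0, ki=2.0, kd=0.1, dt=dt)
for _ in range(2000):                 # 20 s of simulated time
    u = ctrl.update(1.0, angle)       # track a setpoint of 1.0
    angle += dt * (-angle + u)
```

The integral term removes the steady-state offset that a pure proportional controller would leave on this plant; the nonlinear force-contraction profile of real PAMs is what motivates the adaptive controllers compared in the dissertation.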
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James A. (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
Neural Network Grasping Controller for Continuum Robots
2006-01-01
String encoders are attached to the base of section 1, with optical encoders located at the end plates of sections 1 and 2. The cables from each of the string encoders run the entire length of the arm through the optical encoders at the lower sections, as seen in Figure 1. This configuration enables the … encoders at the base section and the optical encoders at the end plates of the distal sections; there were a number of protrusions on the surface of the arm.
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
Towards an SEMG-based tele-operated robot for masticatory rehabilitation.
Kalani, Hadi; Moghimi, Sahar; Akbarzadeh, Alireza
2016-08-01
This paper proposes a real-time trajectory generation for a masticatory rehabilitation robot based on surface electromyography (SEMG) signals. We used two Gough-Stewart robots. The first robot was used as a rehabilitation robot, while the second robot was developed to model the human jaw system. The legs of the rehabilitation robot were controlled by the SEMG signals of a tele-operator to reproduce the masticatory motion in the human jaw, supposedly mounted on the moving platform, through predicting the location of a reference point. Actual jaw motions and the SEMG signals from the masticatory muscles were recorded and used as output and input, respectively. Three different methods, namely time-delayed neural networks, time-delayed fast orthogonal search, and the time-delayed Laguerre expansion technique, were employed and compared to predict the kinematic parameters. The optimal model structures as well as the input delays were obtained for each model and each subject through a genetic algorithm. Equations of motion were obtained by the virtual work method. A fuzzy method was employed to develop a fuzzy impedance controller. Moreover, a jaw model was developed to demonstrate the time-varying behavior of the muscle lengths during the rehabilitation process. The three modeling methods were capable of providing reasonably accurate estimations of the kinematic parameters, although the accuracy and training/validation speed of time-delayed fast orthogonal search were higher than those of the other two methods. Also, during a simulation study, the fuzzy impedance scheme proved successful in controlling the moving platform for accurate navigation of the reference point along the desired trajectory. SEMG has been widely used as a control command for prostheses and exoskeleton robots. However, in the current study, by employing the proposed rehabilitation robot, the complete continuous profile of the clenching motion was reproduced in the sagittal plane.
Copyright © 2016. Published by Elsevier Ltd.
Modeling the thermotaxis behavior of C.elegans based on the artificial neural network.
Li, Mingxu; Deng, Xin; Wang, Jin; Chen, Qiaosong; Tang, Yun
2016-07-03
This research aims at modeling the thermotaxis behavior of C. elegans, a nematode whose neuronal connections have been fully mapped. Firstly, this work establishes a motion model that can perform undulatory locomotion with turning behavior. Secondly, the thermotaxis behavior is modeled by nonlinear functions, which are learned by an artificial neural network. Once the artificial neural networks have been well trained, they can perform the desired thermotaxis behavior. Lastly, several testing simulations are carried out to verify the effectiveness of the model. This work also analyzes the model's performance under different environments. The testing results reveal the essence of the thermotaxis of C. elegans to some extent and theoretically support research on the navigation of crawling robots.
Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji
2015-01-01
Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719
Espinal, Andres; Rostro-Gonzalez, Horacio; Carpio, Martin; Guerra-Hernandez, Erick I.; Ornelas-Rodriguez, Manuel; Sotelo-Figueroa, Marco
2016-01-01
This paper presents a method to design Spiking Central Pattern Generators (SCPGs) to achieve locomotion at different frequencies on legged robots. It is validated by embedding the designs into a Field-Programmable Gate Array (FPGA) and implementing them on a real hexapod robot. The SCPGs are automatically designed by means of a Christiansen Grammar Evolution (CGE)-based methodology. The CGE searches for a configuration (synaptic weights and connections) for each neuron in the SCPG. This is carried out through an indirect representation of candidate solutions, which evolve to replicate a specific spike train according to a locomotion pattern (gait); the similarity between spike trains is measured with the SPIKE distance, which leads the search to a correct configuration. By using this evolutionary approach, several SCPG design specifications can be explicitly added to the SPIKE distance-based fitness function, such as looking for Spiking Neural Networks (SNNs) with minimal connectivity or a Central Pattern Generator (CPG) able to generate different locomotion gaits only by changing the initial input stimuli. The SCPG designs have been successfully implemented on a Spartan-6 FPGA board, and a real-time validation on a 12 Degrees Of Freedom (DOFs) hexapod robot is presented. PMID:27516737
Adaptive Proactive Inhibitory Control for Embedded Real-Time Applications
Yang, Shufan; McGinnity, T. Martin; Wong-Lin, KongFatt
2012-01-01
Psychologists have studied the inhibitory control of voluntary movement for many years. In particular, the countermanding of an impending action has been extensively studied. In this work, we propose a neural mechanism for adaptive inhibitory control in a firing-rate type model based on current findings in animal electrophysiological and human psychophysical experiments. We then implement this model on a field-programmable gate array (FPGA) prototyping system, using dedicated real-time hardware circuitry. Our results show that the FPGA-based implementation can run in real-time while achieving behavioral performance qualitatively suggestive of the animal experiments. Implementing such biological inhibitory control in an embedded device can lead to the development of control systems that may be used in more realistic cognitive robotics or in neural prosthetic systems aiding human movement control. PMID:22701420
Semprini, Marianna; Laffranchi, Matteo; Sanguineti, Vittorio; Avanzino, Laura; De Icco, Roberto; De Michieli, Lorenzo; Chiappalone, Michela
2018-01-01
Neurological diseases causing motor/cognitive impairments are among the most common causes of adult-onset disability. More than one billion people are affected worldwide, and this number is expected to increase in upcoming years because of the rapidly aging population. The frequent lack of complete recovery makes it desirable to develop novel neurorehabilitative treatments, suited to the patients and better targeting the specific disability. To date, rehabilitation therapy can be aided by the technological support of robotic-based therapy, non-invasive brain stimulation, and neural interfaces. In this perspective, we will review the above methods by referring to the most recent advances in each field. Then, we propose and discuss current and future approaches based on the combination of the above. As pointed out in the recent literature, by combining traditional rehabilitation techniques with neuromodulation, biofeedback recordings, and/or novel robotic and wearable assistive devices, several studies have shown that it is possible to considerably improve the amount of recovery with respect to traditional treatments. We will then discuss possible applied research directions to maximize the outcome of a neurorehabilitation therapy, which should include the personalization of the therapy based on patient and clinician needs and preferences.
Matsuda, Eiko; Hubert, Julien; Ikegami, Takashi
2014-01-01
Vicarious trial-and-error (VTE) is a behavior observed in rat experiments that seems to suggest self-conflict. This behavior is seen mainly when the rats are uncertain about making a decision. The presence of VTE is regarded as an indicator of a deliberative decision-making process, that is, searching, predicting, and evaluating outcomes. This process is slower than automated decision-making processes, such as reflex or habituation, but it allows for flexible and ongoing control of behavior. In this study, we propose for the first time a robotic model of VTE to see if VTE can emerge just from a body-environment interaction and to show the underlying mechanism responsible for the observation of VTE and the advantages provided by it. We tried several robots with different parameters and found that they showed three different types of VTE: high numbers of VTE at the beginning of learning that decrease afterward (a pattern similar to that observed in rats); low numbers during the whole learning period; and high numbers all the time. Therefore, we were able to reproduce the phenomenon of VTE in a model robot using only a simple dynamical neural network with Hebbian learning, which suggests that VTE is an emergent property of a plastic and embodied neural network. From a comparison of the three types of VTE, we demonstrated that 1) VTE is associated with chaotic activity of neurons in our model and 2) VTE-showing robots were robust to environmental perturbations. We suggest that the instability of neuronal activity found in VTE allows ongoing learning to rebuild its strategy continuously, which creates robust behavior. Based on these results, we suggest that VTE is caused by a similar mechanism in biology and leads to robust decision making in an analogous way.
Tessadori, Jacopo; Bisio, Marta; Martinoia, Sergio; Chiappalone, Michela
2012-01-01
Behaviors, from simple to most complex, require a two-way interaction with the environment and the contribution of different brain areas depending on the orchestrated activation of neuronal assemblies. In this work we present a new hybrid neuro-robotic architecture based on a neural controller bi-directionally connected to a virtual robot implementing a Braitenberg vehicle aimed at avoiding obstacles. The robot is characterized by proximity sensors and wheels, allowing it to navigate into a circular arena with obstacles of different sizes. As neural controller, we used hippocampal cultures dissociated from embryonic rats and kept alive over Micro Electrode Arrays (MEAs) for 3–8 weeks. The developed software architecture guarantees a bi-directional exchange of information between the natural and the artificial part by means of simple linear coding/decoding schemes. We used two different kinds of experimental preparation: “random” and “modular” populations. In the second case, the confinement was assured by a polydimethylsiloxane (PDMS) mask placed over the surface of the MEA device, thus defining two populations interconnected via specific microchannels. The main results of our study are: (i) neuronal cultures can be successfully interfaced to an artificial agent; (ii) modular networks show a different dynamics with respect to random culture, both in terms of spontaneous and evoked electrophysiological patterns; (iii) the robot performs better if a reinforcement learning paradigm (i.e., a tetanic stimulation delivered to the network following each collision) is activated, regardless of the modularity of the culture; (iv) the robot controlled by the modular network further enhances its capabilities in avoiding obstacles during the short-term plasticity trial. The developed paradigm offers a new framework for studying, in simplified model systems, neuro-artificial bi-directional interfaces for the development of new strategies for brain-machine interaction. 
PMID:23248586
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.
Walter, Florian; Röhrbein, Florian; Knoll, Alois
2015-12-01
The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neural learning of constrained nonlinear transformations
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Gulati, Sandeep; Zak, Michail
1989-01-01
Two issues that are fundamental to developing autonomous intelligent robots, namely, rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
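The inverse-kinematics mapping discussed here can be approached, in its simplest iterative form, with a Jacobian-transpose gradient scheme on a planar 2-link arm. This sketch illustrates the problem setting only; it does not use the terminal-attractor dynamics introduced in the paper, and link lengths, gains, and iteration counts are invented:

```python
import math

def fk(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm with joint angles t1, t2."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def ik_gradient(target, t1=0.3, t2=0.3, lr=0.1, steps=2000):
    """Gradient descent on the squared end-effector error:
    theta += lr * J^T e (Jacobian-transpose method)."""
    tx, ty = target
    for _ in range(steps):
        x, y = fk(t1, t2)
        ex, ey = tx - x, ty - y
        j11 = -math.sin(t1) - math.sin(t1 + t2)   # dx/dt1
        j12 = -math.sin(t1 + t2)                  # dx/dt2
        j21 = math.cos(t1) + math.cos(t1 + t2)    # dy/dt1
        j22 = math.cos(t1 + t2)                   # dy/dt2
        t1 += lr * (j11 * ex + j21 * ey)
        t2 += lr * (j12 * ex + j22 * ey)
    return t1, t2

t1, t2 = ik_gradient((1.0, 1.0))
```

Iterating the update drives the end effector onto the reachable target; the terminal-attractor formalism of the paper is aimed at making this kind of convergence fast and provable rather than merely asymptotic.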
The other half of the embodied mind.
Parisi, Domenico
2011-01-01
Embodied theories of mind tend to be theories of the cognitive half of the mind and to ignore its emotional half while a complete theory of the mind should account for both halves. Robots are a new way of expressing theories of the mind which are less ambiguous and more capable to generate specific and non-controversial predictions than verbally expressed theories. We outline a simple robotic model of emotional states as states of a sub-part of the neural network controlling the robot's behavior which has specific properties and which allows the robot to make faster and more correct motivational decisions, and we describe possible extensions of the model to account for social emotional states and for the expression of emotions that, unlike those of current "emotional" robots, are really "felt" by the robot in that they play a well-identified functional role in the robot's behavior.
PMID:21687441
NASA Astrophysics Data System (ADS)
Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting
2013-01-01
The main purpose of this paper is to use a machine learning method together with the Kinect motion-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot via Bluetooth. The experimental results show that the designed mobile robot remote control system achieves, on average, more than 96% accuracy in identifying 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
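A back-propagation classifier of the kind used here for gesture identification can be sketched on toy data; the following minimal network learns XOR rather than Kinect skeleton features, and the architecture and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])   # XOR targets

W1 = rng.uniform(-1, 1, (2, 8)); b1 = np.zeros(8)   # 2-8-1 network
W2 = rng.uniform(-1, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)            # hidden layer, forward pass
    y = sigmoid(h @ W2 + b2)            # output layer
    d2 = (y - t) / len(X)               # output delta (cross-entropy loss)
    d1 = (d2 @ W2.T) * (1.0 - h ** 2)   # backpropagated hidden delta
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)
```

In the paper's setting the 2-dimensional inputs would be replaced by skeleton-joint feature vectors and the single output by one unit per gesture class; the training loop itself is the same back-propagation procedure.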
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.
Nycz, Christopher J; Gondokaryono, Radian; Carvalho, Paulo; Patel, Nirav; Wartenberg, Marek; Pilitsis, Julie G; Fischer, Gregory S
2017-09-01
The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. From our previous work we have added a 2 degree of freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and performed an evaluation of the mechanism's accuracy and effect on MR image quality. The result of this work is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with accuracies of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting it along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot's presence in the MRI bore, ≤ 10.3% reduction in SNR from running the robot's motors during a scan, and no visible paramagnetic artifacts.
Approaching neuropsychological tasks through adaptive neurorobots
NASA Astrophysics Data System (ADS)
Gigliotta, Onofrio; Bartolomeo, Paolo; Miglino, Orazio
2015-04-01
Neuropsychological phenomena have mainly been modeled, in the mainstream approach, by attempting to reproduce their neural substrate, whereas sensory-motor contingencies have attracted less attention. In this work, we introduce a simulator based on the evolutionary robotics platform Evorobot* for setting up neuropsychological tasks in silico. Moreover, in this study we trained artificial embodied neurorobotic agents equipped with a pan/tilt camera, provided with different neural and motor capabilities, to solve a well-known neuropsychological test: the cancellation task, in which an individual is asked to cancel target stimuli surrounded by distractors. Results showed that embodied agents provided with an additional motor capability (a zooming/attentional actuator) outperformed simple pan/tilt agents, even those equipped with more complex neural controllers, and that the zooming ability is exploited to correctly categorise the presented stimuli. We conclude that, since neural computational power alone cannot explain the (artificial) cognition that emerged through the adaptive process, this kind of modelling approach can be fruitful in neuropsychological modelling, where the importance of having a body is often neglected.
Robotics, motor learning, and neurologic recovery.
Reinkensmeyer, David J; Emken, Jeremy L; Cramer, Steven C
2004-01-01
Robotic devices are helping shed light on human motor control in health and injury. By using robots to apply novel force fields to the arm, investigators are gaining insight into how the nervous system models its external dynamic environment. The nervous system builds internal models gradually by experience and uses them in combination with impedance and feedback control strategies. Internal models are robust to environmental and neural noise, generalized across space, implemented in multiple brain regions, and developed in childhood. Robots are also being used to assist in repetitive movement practice following neurologic injury, providing insight into movement recovery. Robots can haptically assess sensorimotor performance, administer training, quantify amount of training, and improve motor recovery. In addition to providing insight into motor control, robotic paradigms may eventually enhance motor learning and rehabilitation beyond the levels possible with conventional training techniques.
Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting
NASA Astrophysics Data System (ADS)
Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing
2016-03-01
Cell cutting is a significant task in biology study, but high-throughput non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobotic system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scales, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
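The distance-regulated speed adapting strategy described above can be sketched in a few lines. This is a hedged illustration of the general idea only; the function name `adapt_speed`, the linear interpolation between regimes, and all thresholds and speed limits are assumptions, not the paper's actual parameters.

```python
# Hypothetical sketch: the nanoknife approaches fast while far from the
# target cell, then creeps as it gets close, trading speed for precision.
# All numeric defaults below are illustrative assumptions.

def adapt_speed(distance_um, v_max=50.0, v_min=0.5, d_far=100.0, d_near=5.0):
    """Return a positioning speed (um/s) as a function of knife-cell distance."""
    if distance_um >= d_far:          # far away: move at full speed
        return v_max
    if distance_um <= d_near:         # close in: creep at minimum speed
        return v_min
    # linear interpolation between the two regimes
    frac = (distance_um - d_near) / (d_far - d_near)
    return v_min + frac * (v_max - v_min)
```

Any monotone speed profile would serve; the point is that speed is regulated continuously by the measured knife-cell distance.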
Robotic investigation on effect of stretch reflex and crossed inhibitory response on bipedal hopping
Rosendo, Andre; Ikemoto, Shuhei; Shimizu, Masahiro; Hosoda, Koh
2018-01-01
To maintain balance during dynamic locomotion, the effects of proprioceptive sensory feedback control (e.g. reflexive control) cannot be ignored, given its simple sensing and fast reaction time. Scientists have identified the pathways of reflexes; however, it is difficult to investigate their effects during locomotion because locomotion is controlled by a complex neural system and current technology does not allow us to change the control pathways in living humans. To understand these effects, we construct a musculoskeletal bipedal robot, which has a body structure and dynamics similar to those of a human. By conducting experiments on this robot, we investigate the effects of reflexes (the stretch reflex and the crossed inhibitory response) on posture during hopping, a simple and representative bouncing gait with complex dynamics. Through over 300 hopping trials, we confirm that both the stretch reflex and the crossed response can contribute to reducing lateral inclination during hopping. These reflexive pathways do not use any prior knowledge of the dynamic information of the body, such as its inclination. Beyond improving the understanding of the human neural system, this study provides roboticists with biomimetic ideas for robot locomotion control. PMID:29593088
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to stricter recognition constraints. PMID:27579033
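The time-shift correlation series that feeds the ANN can be illustrated roughly as follows. This is an assumption about the general approach, not the authors' exact algorithm: a P300 template is correlated with the EEG epoch at successive shifts, and the resulting correlation values form the network's input vector. The helper names `pearson` and `time_shift_series` are hypothetical.

```python
# Illustrative sketch (assumed structure): slide a P300 template across an
# EEG epoch and record the Pearson correlation at each shift. The series
# of correlations compensates for peak-time uncertainty.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def time_shift_series(epoch, template, max_shift, step=1):
    """Correlation of the template with the epoch at successive shifts."""
    n = len(template)
    return [pearson(epoch[s:s + n], template)
            for s in range(0, max_shift + 1, step)]
```

The shift at which the series peaks indicates where the P300-like deflection sits inside the epoch; the whole series, not just the peak, is handed to the classifier.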
Socially assistive robotics for stroke and mild TBI rehabilitation.
Matarić, Maja; Tapus, Adriana; Winstein, Carolee; Eriksson, Jon
2009-01-01
This paper describes an interdisciplinary research project aimed at developing and evaluating effective, user-friendly, non-contact robot-assisted therapy intended for in-home use. The approach stems from the emerging field of social cognitive neuroscience, which seeks to understand phenomena in terms of interactions between the social, cognitive, and neural levels of analysis. This technology-assisted therapy is designed to be safe and affordable, and relies on novel human-robot interaction methods for accelerated recovery of upper-extremity function after lesion-induced hemiparesis. The work is based on combined expertise in the science and technology of non-contact socially assistive robotics and the clinical science of neurorehabilitation and motor learning, brought together to study how best to enhance recovery after stroke and mild traumatic brain injury. Our approach is original and promising in that it combines several ingredients that have individually been shown to be important for learning and long-term efficacy in motor neurorehabilitation: (1) intensity of task-specific training and (2) engagement and self-management of goal-directed actions. These principles motivate and guide the strategies used to develop novel user activity sensing, and provide the rationale for the development of socially assistive robotics therapy for monitoring and coaching users toward personalized and optimal rehabilitation programs.
Optical neural network system for pose determination of spinning satellites
NASA Technical Reports Server (NTRS)
Lee, Andrew; Casasent, David
1990-01-01
An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
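The gradient-descent minimization of a quadratic energy over neural activities, as used by the tracker above, can be sketched minimally as follows. The weight matrix, biases and update scheme here are illustrative assumptions; the actual tracker's energy function encodes track-assignment constraints that are not reproduced.

```python
# Minimal sketch of gradient-descent neural evolution on a quadratic
# energy E(x) = -0.5 x.W.x - b.x, with activities clipped to [0, 1].
# W and b are illustrative; in the tracker each neuron stands for one
# candidate track and W penalizes mutually exclusive assignments.

def energy(W, b, x):
    n = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(b[i] * x[i] for i in range(n))

def descend(W, b, x, lr=0.05, steps=200):
    """Follow the negative gradient of E until activities settle."""
    n = len(x)
    for _ in range(n * 0 + steps):
        for i in range(n):
            grad_i = -sum(W[i][j] * x[j] for j in range(n)) - b[i]
            x[i] = min(1.0, max(0.0, x[i] - lr * grad_i))
    return x
```

With an inhibitory coupling between two competing track hypotheses, descent drives the better-supported neuron toward 1 and the other toward 0 while the energy decreases monotonically.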
NASA Astrophysics Data System (ADS)
Baumann, Erwin W.; Williams, David L.
1993-08-01
Artificial neural networks capable of learning and recalling stochastic associations between non-deterministic quantities have received relatively little attention to date. One potential application of such stochastic associative networks is the generation of sensory 'expectations' based on arbitrary subsets of sensor inputs, to support anticipatory and investigative behavior in sensor-based robots. Another application of this type of associative memory is the prediction of how a scene will look in one spectral band, including noise, based upon its appearance in several other wavebands. This paper describes a semi-supervised neural network architecture composed of self-organizing maps associated through stochastic inter-layer connections. This 'Stochastic Associative Memory' (SAM) can learn and recall non-deterministic associations between multi-dimensional probability density functions. The stochastic nature of the network also enables it to represent noise distributions that are inherent in any true sensing process. The SAM architecture, training process, and an initial application to sensor image prediction are described. Relationships to Fuzzy Associative Memory (FAM) are discussed.
Person detection, tracking and following using stereo camera
NASA Astrophysics Data System (ADS)
Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping
2018-04-01
Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
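The scale ambiguity mentioned above disappears with a calibrated stereo pair because depth follows directly from disparity in a rectified pinhole model, Z = fB/d. A minimal sketch of that relation (the function name and the numbers used below are illustrative, not the paper's calibration):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d,
# where f is the focal length in pixels, B the baseline in metres,
# and d the horizontal disparity in pixels. Illustrative sketch only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return metric depth Z for a matched feature in a rectified pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Because Z is metric, the tracked bounding box can be anchored to a true 3D position, which is what removes the monocular tracker's scale ambiguity.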
Coordinated Control of Slip Ratio for Wheeled Mobile Robots Climbing Loose Sloped Terrain
Li, Zhengcai; Wang, Yang
2014-01-01
A challenging problem faced by wheeled mobile robots (WMRs) such as planetary rovers traversing loose sloped terrain is the inevitable longitudinal slip suffered by the wheels, which often leads to their deviation from the predetermined trajectory, reduced drive efficiency, and possible failures. This study investigates this problem using terramechanics analysis of the wheel-soil interaction. First, a slope-based wheel-soil interaction terramechanics model is built, and an online slip coordinated algorithm is designed based on the goal of optimal drive efficiency. An equation of state is established using the coordinated slip as the desired input and the actual slip as a state variable. To improve the robustness and adaptability of the control system, an adaptive neural network is designed. Analytical results and those of a simulation using Vortex demonstrate the significantly improved mobile performance of the WMR using the proposed control system. PMID:25276849
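For context, the longitudinal slip ratio that such controllers coordinate is conventionally defined for a driving wheel as s = (rω − v)/(rω), where r is the wheel radius, ω its angular velocity and v the vehicle's forward velocity. A small sketch using that standard terramechanics definition (the paper's actual coordinated control law is not reproduced here):

```python
# Standard longitudinal slip ratio for a driving wheel:
# s = (r*omega - v) / (r*omega), so s = 0 for pure rolling and
# s = 1 for a wheel spinning in place. Illustrative sketch only.

def slip_ratio(wheel_radius, omega, v):
    """Slip ratio of a driving wheel (requires r*omega > 0)."""
    circumferential = wheel_radius * omega
    if circumferential <= 0:
        raise ValueError("expected a driving wheel with r*omega > 0")
    return (circumferential - v) / circumferential
```

The controller's job is then to drive the measured s toward the coordinated slip target that maximizes drive efficiency on the slope.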
Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A
2016-05-19
The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.
Motion planning with complete knowledge using a colored SOM.
Vleugels, J; Kok, J N; Overmars, M
1997-01-01
The motion planning problem requires that a collision-free path be determined for a robot moving amidst a fixed set of obstacles. Most neural network approaches to this problem are for the situation in which only local knowledge about the configuration space is available. The main goal of the paper is to show that neural networks are also suitable tools in situations with complete knowledge of the configuration space. In this paper we present an approach that combines a neural network and deterministic techniques. We define a colored version of Kohonen's self-organizing map that consists of two different classes of nodes. The network is presented with random configurations of the robot and, from this information, it constructs a road map of possible motions in the work space. The map is a growing network, and different nodes are used to approximate boundaries of obstacles and the Voronoi diagram of the obstacles, respectively. In a second phase, the positions of the two kinds of nodes are combined to obtain the road map. In this way a number of typical problems with small obstacles and passages are avoided, and the required number of nodes for a given accuracy is within reasonable limits. This road map is searched to find a motion connecting the given source and goal configurations of the robot. The algorithm is simple and general; the only specific computation that is required is a check for intersection of two polygons. We implemented the algorithm for planar robots allowing both translation and rotation and experiments show that compared to conventional techniques it performs well, even for difficult motion planning scenes.
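The node-migration step of a Kohonen self-organizing map, which underlies the colored map described above, can be sketched as follows. The two node classes and the road-map extraction phase are omitted; this only shows how nodes drift toward randomly sampled robot configurations. Parameter values and the 1-D neighbourhood are illustrative assumptions.

```python
# One Kohonen update: find the best-matching unit (BMU) for a sampled
# configuration, then pull the BMU and its neighbours toward the sample,
# weighted by a Gaussian neighbourhood function. Illustrative sketch.
import math

def som_step(nodes, sample, eta=0.3, sigma=1.0):
    """Move the best-matching node (and neighbours) toward a sample."""
    bmu = min(range(len(nodes)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], sample)))
    for i, w in enumerate(nodes):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # 1-D neighbourhood
        nodes[i] = [wj + eta * h * (sj - wj) for wj, sj in zip(w, sample)]
    return bmu
```

Presented with random collision-free configurations, the nodes spread out over the free configuration space; the colored variant then assigns them to obstacle boundaries or to the Voronoi diagram.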
Merians, A. S.; Tunik, E.; Fluet, G. G.; Qiu, Q.; Adamovich, S. V.
2017-01-01
Aim: Upper-extremity interventions for hemiparesis are a challenging aspect of stroke rehabilitation. The purpose of this paper is to report the feasibility of using virtual environments (VEs) in combination with robotics to assist recovery of hand-arm function, and to present preliminary data demonstrating the potential of using sensory manipulations in VEs to drive activation in targeted neural regions. Methods: We trained 8 subjects for eight three-hour sessions using a library of complex VEs integrated with robots, comparing training the arm and hand separately to training the arm and hand together. Instrumented gloves and a hand exoskeleton were used for hand tracking and haptic effects. A Haptic Master robotic arm was used for arm tracking and for generating three-dimensional haptic VEs. To investigate the use of manipulations in VEs to drive neural activations, we created a "virtual mirror" that subjects used while performing a unimanual task. Cortical activation was measured with functional MRI (fMRI) and transcranial magnetic stimulation. Results: Both groups showed improvement in kinematics and in measures of real-world function. The group trained using their arm and hand together showed greater improvement. In a stroke subject, fMRI data suggested that virtual-mirror feedback could activate the sensorimotor cortex contralateral to the reflected hand (ipsilateral to the moving hand), thus recruiting the lesioned hemisphere. Conclusion: Gaming simulations interfaced with robotic devices provide a training medium that can modify movement patterns. In addition to showing that our VE therapies can optimize behavioral performance, we show preliminary evidence supporting the potential of specific sensory manipulations to selectively recruit targeted neural circuits. PMID:19158659
Basic emotions and adaptation. A computational and evolutionary model
2017-01-01
The core principles of the evolutionary theories of emotions hold that affective states represent crucial drives for action selection and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors under different experimental conditions. In particular, we focus on the emotion of fear; the environment explored by the artificial agents can therefore contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning: the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives, and to select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations over each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions. PMID:29107988
ELIPS: Toward a Sensor Fusion Processor on a Chip
NASA Technical Reports Server (NTRS)
Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James
1998-01-01
The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way that classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational-intelligence primitives, and relies on a set of fuzzy-set, fuzzy-inference and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several sets of interceptor data, three important ELIPS building blocks (a fuzzy-set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.
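As a toy illustration of the kind of fuzzy primitives ELIPS implements in analog hardware, the sketch below shows a triangular membership function and a min-operator rule firing. This is entirely illustrative and says nothing about the fabricated circuits' actual behavior.

```python
# Two classic fuzzy primitives: a triangular membership function
# (fuzzy-set preprocessing) and rule firing via the min operator
# (fuzzy AND over antecedent memberships). Illustrative sketch only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rule(memberships):
    """Fuzzy AND of a rule's antecedent memberships (min operator)."""
    return min(memberships)
```

In an analog implementation these become simple piecewise-linear and minimum-selector circuits, which is what makes microsecond-scale processing plausible.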
In vivo robotics: the automation of neuroscience and other intact-system biological fields
Kodandaramaiah, Suhasa B.; Boyden, Edward S.; Forest, Craig R.
2013-01-01
Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to impact neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience, and present a concrete example with our recent automation of in vivo whole cell patch clamp electrophysiology of neurons in the living mouse brain. PMID:23841584
ERIC Educational Resources Information Center
Nikelshpur, Dmitry O.
2014-01-01
Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…
Investigating Models of Social Development Using a Humanoid Robot
1998-01-01
robot interaction and cooperation (Takanishi, Hirano & Sato 1998, Morita, Shibuya 1996...) ... neural models of spinal motor neurons (Williamson ...) ... etiology and behavioral manifestations of pervasive developmental disorders such as autism ... Individuals with autism tend to have normal sensory ... grasp the implications of this information. While interested in joint attention both as an explanation ... the deficits of autism certainly cover many other
Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors.
Chang, Carolina
2005-11-01
Habituation is a form of nonassociative learning observed in a variety of species of animals. Arguably, it is the simplest form of learning. Nonetheless, the ability to habituate to certain stimuli implies plastic neural systems and adaptive behaviors. This paper describes how computational models of habituation can be applied to real robots. In particular, we discuss the problem of the oscillatory movements observed when a Khepera robot navigates through narrow hallways using a biologically inspired neurocontroller. Results show that habituation to the proximity of the walls can lead to smoother navigation. Habituation to sensory stimulation to the sides of the robot does not interfere with the robot's ability to turn at dead ends and to avoid obstacles outside the hallway. This paper shows that simple biological mechanisms of learning can be adapted to achieve better performance in real mobile robots.
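A simple habituation dynamic of the kind used in such neurocontrollers can be sketched as below, loosely following Stanley's classic habituation model in discrete time. The function name and all parameter values are illustrative assumptions: synaptic efficacy y decays while the stimulus is present and recovers toward baseline in its absence.

```python
# Discrete sketch of a habituation dynamic (loosely after Stanley's model):
# y moves toward its baseline y0 at rate alpha/tau while each incoming
# stimulus s pushes it down, so a sustained stimulus is progressively
# ignored and a withdrawn stimulus lets the response recover.

def habituate(stimulus_seq, y0=1.0, tau=5.0, alpha=1.0):
    y, trace = y0, []
    for s in stimulus_seq:
        y += (alpha * (y0 - y) - s) / tau
        y = max(0.0, min(y0, y))   # keep efficacy within [0, y0]
        trace.append(y)
    return trace
```

Weighting the wall-proximity sensors by such an efficacy is one way sustained, symmetric stimulation in a narrow corridor could be damped while responses to novel obstacles stay intact.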
Self mobile space manipulator project
NASA Technical Reports Server (NTRS)
Brown, H. Ben; Friedman, Mark; Xu, Yangsheng; Kanade, Takeo
1992-01-01
A relatively simple, modular, low mass, low cost robot is being developed for space EVA that is large enough to be independently mobile on a space station or platform exterior, yet versatile enough to accomplish many vital tasks. The robot comprises two long flexible links connected by a rotary joint, with 2-DOF 'wrist' joints and grippers at each end. It walks by gripping pre-positioned attachment points, such as trusswork nodes, and alternately shifting its base of support from one foot (gripper) to the other. The robot can perform useful tasks such as visual inspection, material transport, and light assembly by manipulating objects with one gripper, while stabilizing itself with the other. At SOAR '90, we reported development of 1/3 scale robot hardware, modular trusswork to serve as a locomotion substrate, and a gravity compensation system to allow laboratory tests of locomotion strategies on the horizontal face of the trusswork. In this paper, we report on project progress including the development of: (1) adaptive control for automatic adjustment to loads; (2) enhanced manipulation capabilities; (3) machine vision, including the use of neural nets, to guide autonomous locomotion; (4) locomotion between orthogonal trusswork faces; and (5) improved facilities for gravity compensation and telerobotic control.
Locomotion training of legged robots using hybrid machine learning techniques
NASA Technical Reports Server (NTRS)
Simon, William E.; Doerschuk, Peggy I.; Zhang, Wen-Ran; Li, Andrew L.
1995-01-01
In this study, artificial neural networks and fuzzy logic are used to control the jumping behavior of a three-link uniped robot. The biped locomotion control problem is an extension of uniped locomotion control. Study of legged locomotion dynamics indicates that a hierarchical controller is required to control the behavior of a legged robot. A structured control strategy is suggested which includes navigator, motion planner, biped coordinator and uniped controllers. A three-link uniped robot simulation was developed to be used as the plant. Neurocontrollers were trained both online and offline. In the case of online training, a reinforcement learning technique was used to train the neurocontroller to make the robot jump to a specified height. After several hundred iterations of training, the plant output achieved an accuracy of 7.4%. However, when jump distance and body angular momentum were also included in the control objectives, training time became impractically long. In the case of offline training, a three-layered backpropagation (BP) network was first used with three inputs, three outputs and 15 to 40 hidden nodes. Pre-generated data were presented to the network with a learning rate as low as 0.003 in order to reach convergence. The low learning rate required for convergence resulted in a very slow training process, which took weeks to learn 460 examples. After training, performance of the neurocontroller was rather poor. Consequently, the BP network was replaced by a Cerebellar Model Articulation Controller (CMAC) network. Subsequent experiments described in this document show that the CMAC network is more suitable to the solution of uniped locomotion control problems in terms of both learning efficiency and performance. A new approach is introduced in this report, viz., a self-organizing multiagent cerebellar model for fuzzy-neural control of uniped locomotion is suggested to improve training efficiency.
This is currently being evaluated for a possible patent by NASA, Johnson Space Center. An alternative modular approach is also developed which uses separate controllers for each stage of the running stride. A self-organizing fuzzy-neural controller controls the height, distance and angular momentum of the stride. A CMAC-based controller controls the movement of the leg from the time the foot leaves the ground to the time of landing. Because the leg joints are controlled at each time step during flight, movement is smooth and obstacles can be avoided. Initial results indicate that this approach can yield fast, accurate results.
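For readers unfamiliar with the CMAC architecture favored above, a minimal one-dimensional sketch follows. The dimensions, tiling counts and learning rate are illustrative assumptions; the robot controller's actual configuration is not reproduced. The key property is that several shifted, overlapping tilings give local, fast weight updates, which is why CMAC trains much faster than a dense backpropagation network on such problems.

```python
# Minimal 1-D CMAC sketch: the input range is covered by n_tilings
# coarse tilings, each shifted slightly; a prediction averages one
# weight per tiling, and training adjusts only those few weights.

class CMAC:
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.hi, self.lr = lo, hi, lr
        self.w = [[0.0] * (n_tiles + 1) for _ in range(n_tilings)]

    def _cells(self, x):
        span = (self.hi - self.lo) / self.n_tiles
        for t in range(self.n_tilings):
            offset = span * t / self.n_tilings      # shifted tilings overlap
            yield t, int((x - self.lo + offset) / span)

    def predict(self, x):
        return sum(self.w[t][c] for t, c in self._cells(x)) / self.n_tilings

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._cells(x):
            self.w[t][c] += self.lr * err           # local, fast updates
```

Because only the handful of weights whose cells contain x are touched, learning one input barely disturbs predictions far away, the locality that backpropagation lacks.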
Adaptive robotic control driven by a versatile spiking cerebellar network.
Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio
2014-01-01
The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (Eye blinking classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed-loop. The SNN processed in real-time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fibers-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimuli and perturbations patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the execution speed close to real time at high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies of the neural control of cognitive robots and systems as well.
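Before committing an H-H model to CORDIC arithmetic on an FPGA, a floating-point reference of the step-by-step integration is useful. The sketch below is a plain forward-Euler simulation of a single H-H neuron with the textbook squid-axon parameters; it is a software reference only, not the paper's hardware implementation:

```python
import math

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley point neuron
    (standard squid-axon parameters; V in mV, t in ms)."""
    a_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    b_m = 4.0 * math.exp(-(V + 65) / 18)
    a_h = 0.07 * math.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + math.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    b_n = 0.125 * math.exp(-(V + 65) / 80)
    I_na = 120.0 * m**3 * h * (V - 50.0)      # sodium current
    I_k = 36.0 * n**4 * (V + 77.0)            # potassium current
    I_l = 0.3 * (V + 54.387)                  # leak current
    V += dt * (I_ext - I_na - I_k - I_l)      # C_m = 1 uF/cm^2
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    return V, m, h, n

# inject 10 uA/cm^2 for 50 ms and count spikes (upward crossings of 0 mV)
V, m, h, n = -65.0, 0.053, 0.596, 0.317       # resting-state values
spikes, above = 0, False
for _ in range(5000):                         # 5000 * 0.01 ms = 50 ms
    V, m, h, n = hh_step(V, m, h, n, 10.0)
    if V > 0 and not above:
        spikes += 1
    above = V > 0
```

The per-step exponentials are exactly the operations the paper replaces with CORDIC units; validating the floating-point trajectory first gives a ground truth for the fixed-point version.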
Intelligent control and cooperation for mobile robots
NASA Astrophysics Data System (ADS)
Stingu, Petru Emanuel
This work addresses current research being conducted at the Automation & Robotics Research Institute in the areas of UAV quadrotor control and heterogeneous multi-vehicle cooperation. Autonomy can be successfully achieved by a robot under the following conditions: the robot has to be able to acquire knowledge about the environment and itself, and it also has to be able to reason under uncertainty. The control system must react quickly to immediate challenges, but also has to slowly adapt and improve based on accumulated knowledge. The major contribution of this work is the transfer of ADP algorithms from the purely theoretical environment to complex real-world robotic platforms that work in real time and in uncontrolled environments. Many solutions are adopted from those present in nature because they have been proven to be close to optimal in very different settings. For the control of a single platform, reinforcement learning algorithms are used to design suboptimal controllers for a class of complex systems that can be conceptually split into local loops with simpler dynamics and relatively weak coupling to the rest of the system. Optimality is enforced by having a global critic, but the curse of dimensionality is avoided by using local actors and intelligent pre-processing of the information used for learning the optimal controllers. The system model is used for constructing the structure of the control system, but on top of that the adaptive neural networks that form the actors use the knowledge acquired during normal operation to get closer to optimal control. In real-world experiments, efficient learning is a strong requirement for success. This is accomplished by using an approximation of the system model to focus the learning on equivalent configurations of the state space. Due to the availability of only local data for training, neural networks with local activation functions are implemented.
For the control of a formation of robots subject to dynamic communication constraints, game theory is used in addition to reinforcement learning. The nodes maintain an extra set of state variables about all the other nodes they can communicate with; the most important of these are trust and predictability. They are a way to incorporate knowledge acquired in the past into the control decisions taken by each node. The trust variable provides a simple mechanism for the implementation of reinforcement learning. For robot formations, potential-field-based control algorithms are used to generate the control commands. The formation structure changes due to the environment and due to the decisions of the nodes. It is a problem of building a graph and coalitions through distributed decisions while still reaching globally optimal behavior.
Adaptive Remote-Sensing Techniques Implementing Swarms of Mobile Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asher, R.B.; Cameron, S.M.; Loubriel, G.M.
1998-11-25
In many situations, stand-off remote-sensing and hazard-interdiction techniques over realistic operational areas are often impractical and difficult to characterize. An alternative approach is to implement an adaptively deployable array of sensitive agent-specific devices. Our group has been studying the collective behavior of an autonomous, multi-agent system applied to chem/bio detection and related emerging threat applications. The current physics-based models we are using coordinate a sensor array for multivariate signal optimization and coverage as realized by a swarm of robots or mobile vehicles. These intelligent control systems integrate globally operating decision-making systems and locally cooperative learning neural networks to enhance real-time operational responses to dynamical environments, examples of which include obstacle avoidance, responding to prevailing wind patterns, and overcoming other natural obscurants or interferences. Collectively, sensor neurons with simple properties, interacting according to basic community rules, can accomplish complex interconnecting functions such as generalization, error correction, pattern recognition, sensor fusion, and localization. Neural nets provide a greater degree of robustness and fault tolerance than conventional systems in that minor variations or imperfections do not impair performance. The robotic platforms would be equipped with sensor devices that perform optical detection of biologicals in combination with multivariate chemical analysis tools based on genetic and neural network algorithms, laser-diode LIDAR analysis, ultra-wideband short-pulsed transmitting and receiving antennas, thermal imaging sensors, and optical communication technology providing robust data throughput pathways. Mission scenarios under consideration include ground penetrating radar (GPR) for detection of underground structures, airborne systems, and plume migration and mitigation.
We will describe our research in these areas and give a status report on our progress.
Sale, Patrizio; Infarinato, Francesco; Del Percio, Claudio; Lizio, Roberta; Babiloni, Claudio; Foti, Calogero; Franceschini, Marco
2015-12-01
Stroke is the leading cause of permanent disability in developed countries; its effects may include sensory, motor, and cognitive impairment as well as a reduced ability to perform self-care and participate in social and community activities. A number of studies have shown that the use of robotic systems in upper limb motor rehabilitation programs provides safe and intensive treatment to patients with motor impairments because of a neurological injury. Furthermore, robot-aided therapy was shown to be well accepted and tolerated by all patients; however, it is not known whether a specific robot-aided rehabilitation can induce beneficial cortical plasticity in stroke patients. Here, we present a procedure to study neural underpinning of robot-aided upper limb rehabilitation in stroke patients. Neurophysiological recordings use the following: (a) 10-20 system electroencephalographic (EEG) electrode montage; (b) bipolar vertical and horizontal electrooculographies; and (c) bipolar electromyography from the operating upper limb. Behavior monitoring includes the following: (a) clinical data and (b) kinematic and dynamic of the operant upper limb movements. Experimental conditions include the following: (a) resting state eyes closed and eyes open, and (b) robotic rehabilitation task (maximum 80 s each block to reach 4-min EEG data; interblock pause of 1 min). The data collection is performed before and after a program of 30 daily rehabilitation sessions. EEG markers include the following: (a) EEG power density in the eyes-closed condition; (b) reactivity of EEG power density to eyes opening; and (c) reactivity of EEG power density to robotic rehabilitation task. The above procedure was tested on a subacute patient (29 poststroke days) and on a chronic patient (21 poststroke months). 
After the rehabilitation program, we observed (a) improved clinical condition; (b) improved performance during the robotic task; (c) reduced delta rhythms (1-4 Hz) and increased alpha rhythms (8-12 Hz) during the resting state eyes-closed condition; (d) increased alpha desynchronization to eyes opening; and (e) decreased alpha desynchronization during the robotic rehabilitation task. We conclude that the present procedure is suitable for evaluation of the neural underpinning of robot-aided upper limb rehabilitation.
On learning navigation behaviors for small mobile robots with reservoir computing architectures.
Antonelo, Eric Aislan; Schrauwen, Benjamin
2015-04-01
This paper proposes a general reservoir computing (RC) learning framework that can be used to learn navigation behaviors for mobile robots in simple and complex unknown partially observable environments. RC provides an efficient way to train recurrent neural networks by letting the recurrent part of the network (called reservoir) be fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of navigation attractor or behavior that can be embedded in the high-dimensional space of the reservoir after learning. The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are shown in this paper. The first approach learns multiple behaviors based on the examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors toward the goal.
Chen, Qihong; Long, Rong; Quan, Shuhai
2014-01-01
This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant auto-regressive moving average with exogenous input (ARMAX) model, using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed in this framework as operating-state-dependent, time-varying, locally linear behavior, a linear constrained model predictive control algorithm is developed to optimize the power split between the fuel cell and ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and ultracapacitor and limit the rate of change of the fuel cell current, thereby extending the lifetime of the fuel cell. PMID:24707206
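The ARMAX identification underlying such a controller can be illustrated with its simplest special case: a first-order ARX model identified online by recursive least squares. The plant coefficients, the noise-free setting, and the omission of the moving-average and recurrent-network parts are all simplifying assumptions for the sketch:

```python
import random
random.seed(2)

# hypothetical first-order plant: y(k+1) = a*y(k) + b*u(k)
a_true, b_true = 0.8, 0.5
theta = [0.0, 0.0]                      # estimated [a, b]
P = [[100.0, 0.0], [0.0, 100.0]]        # parameter covariance

def rls_update(theta, P, phi, y):
    # standard RLS: K = P*phi / (1 + phi'*P*phi); theta += K*(y - phi'*theta)
    Pp = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pp[0] + phi[1]*Pp[1]
    K = [Pp[0]/denom, Pp[1]/denom]
    err = y - (theta[0]*phi[0] + theta[1]*phi[1])
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
    P = [[P[0][0] - K[0]*Pp[0], P[0][1] - K[0]*Pp[1]],
         [P[1][0] - K[1]*Pp[0], P[1][1] - K[1]*Pp[1]]]
    return theta, P

y = 0.0
for _ in range(500):
    u = random.uniform(-1.0, 1.0)       # persistently exciting input
    y_next = a_true * y + b_true * u
    theta, P = rls_update(theta, P, [y, u], y_next)
    y = y_next
```

Once the local linear model is identified at each operating point, a constrained MPC can be posed over it, which is the structure the paper exploits.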
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 1
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Houston, Clear Lake. The workshop was held April 11 to 13 at the Johnson Space Center. Technical topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
Natural and Artificial Intelligence, Language, Consciousness, Emotion, and Anticipation
NASA Astrophysics Data System (ADS)
Dubois, Daniel M.
2010-11-01
The classical paradigm of the neural brain as the seat of human natural intelligence is too restrictive. This paper defends the idea that the neural ectoderm is the actual brain, based on the development of the human embryo. Indeed, the neural ectoderm includes the neural crest, given by pigment cells in the skin and ganglia of the autonomic nervous system, and the neural tube, given by the brain, the spinal cord, and motor neurons. So the brain is completely integrated in the ectoderm, and cannot work alone. The paper presents fundamental properties of the brain as follows. Firstly, Paul D. MacLean proposed the triune human brain, which consists of three brains in one, following the evolution of the species: the reptilian complex, the limbic system, and the neo-cortex. Secondly, consciousness and conscious awareness are analysed. Thirdly, the anticipatory unconscious free will and conscious free veto are described in agreement with the experiments of Benjamin Libet. Fourthly, the main section explains the development of the human embryo and shows that the neural ectoderm is the whole neural brain. Fifthly, a conjecture is proposed that the neural brain is completely programmed with scripts written in biological low-level and high-level languages, in a manner similar to the way cells are programmed by the genetic code. Finally, it is concluded that the proposition of the neural ectoderm as the whole neural brain is a breakthrough in the understanding of natural intelligence, and also in the future design of robots with artificial intelligence.
Neurosurgery and the dawning age of Brain-Machine Interfaces
Rowland, Nathan C.; Breshears, Jonathan; Chang, Edward F.
2013-01-01
Brain–machine interfaces (BMIs) are on the horizon for clinical neurosurgery. Electrocorticography-based platforms are less invasive than implanted microelectrodes, however, the latter are unmatched in their ability to achieve fine motor control of a robotic prosthesis capable of natural human behaviors. These technologies will be crucial to restoring neural function to a large population of patients with severe neurologic impairment – including those with spinal cord injury, stroke, limb amputation, and disabling neuromuscular disorders such as amyotrophic lateral sclerosis. On the opposite end of the spectrum are neural enhancement technologies for specialized applications such as combat. An ongoing ethical dialogue is imminent as we prepare for BMI platforms to enter the neurosurgical realm of clinical management. PMID:23653884
Improving Cognitive Skills of the Industrial Robot
NASA Astrophysics Data System (ADS)
Bezák, Pavol
2015-08-01
At present, there are plenty of industrial robots that are programmed to do the same repetitive task all the time. Industrial robots doing this kind of job are not able to understand whether the action is correct, effective or good. Object detection, manipulation and grasping are challenging due to hand and object modeling uncertainties, unknown contact types and object stiffness properties. In this paper, a proposal for an intelligent humanoid hand object detection and grasping model is presented, assuming that the object properties are known. The control is simulated in Matlab Simulink/SimMechanics, the Neural Network Toolbox and the Computer Vision System Toolbox.
In vivo robotics: the automation of neuroscience and other intact-system biological fields.
Kodandaramaiah, Suhasa B; Boyden, Edward S; Forest, Craig R
2013-12-01
Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to influence neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience and present a concrete example with our recent automation of in vivo whole-cell patch clamp electrophysiology of neurons in the living mouse brain. © 2013 New York Academy of Sciences.
Self-organization via active exploration in robotic applications. Phase 2: Hybrid hardware prototype
NASA Technical Reports Server (NTRS)
Oegmen, Haluk
1993-01-01
In many environments human-like intelligent behavior is required from robots to assist and/or replace human operators. The purpose of these robots is to reduce human time and effort in various tasks. Thus the robot should be robust and as autonomous as possible in order to eliminate or to keep to a strict minimum its maintenance and external control. Such requirements lead to the following properties: fault tolerance, self organization, and intelligence. A good insight into implementing these properties in a robot can be gained by considering human behavior. In the first phase of this project, a neural network architecture was developed that captures some fundamental aspects of human categorization, habit, novelty, and reinforcement behavior. The model, called FRONTAL, is a 'cognitive unit' regulating the exploratory behavior of the robot. In the second phase of the project, FRONTAL was interfaced with an off-the-shelf robotic arm and a real-time vision system. The components of this robotic system, a review of FRONTAL, and simulation studies are presented in this report.
The Structure, Design, and Closed-Loop Motion Control of a Differential Drive Soft Robot.
Wu, Pang; Jiangbei, Wang; Yanqiong, Fei
2018-02-01
This article presents the structure, design, and motion control of an inchworm-inspired pneumatic soft robot that can perform differential movement. The robot mainly consists of two columns of pneumatic multi-airbag actuators, one sensor, one baseboard, front feet, and rear feet. By varying the inflation times of the left and right actuators, the robot can perform both linear and turning movements. The actuators are composed of multiple airbags, and the design of the airbags is analyzed. To deal with the nonlinear behavior of the soft robot, we use radial basis function neural networks to train the turning ability of the robot on three different surfaces and create a mathematical model relating the coefficient of friction, deflection angle, and inflation time. We then establish a closed-loop automatic control model using a three-axis electronic compass sensor. Finally, the automatic control model is verified by linear and turning movement experiments, in which the robot completes linear and turning movements under the closed-loop control system.
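A radial basis function network of the kind used above is a weighted sum of Gaussian bumps whose output weights are fitted by gradient descent. The sketch below fits a hypothetical inflation-time-to-turning-angle mapping; the plant function, centers, and widths are illustrative assumptions, not the paper's measured data:

```python
import math
import random
random.seed(4)

# hypothetical plant: inflation-time difference (s) -> turning angle (deg)
def plant(dt):
    return 40.0 * math.tanh(2.0 * dt)

CENTERS = [-1.0, -0.5, 0.0, 0.5, 1.0]    # Gaussian centers
SIGMA = 0.4                              # common width
w = [0.0] * len(CENTERS)                 # trainable output weights

def features(x):
    return [math.exp(-((x - c) / SIGMA) ** 2) for c in CENTERS]

def predict(x):
    return sum(wi * fi for wi, fi in zip(w, features(x)))

lr = 0.1
for _ in range(5000):                    # stochastic gradient descent on w
    x = random.uniform(-1.0, 1.0)
    f = features(x)
    err = plant(x) - sum(wi * fi for wi, fi in zip(w, f))
    for i in range(len(w)):
        w[i] += lr * err * f[i]
```

Because the model is linear in its output weights, training is fast and well-behaved even when the underlying friction behavior is nonlinear, which is why RBF networks suit this kind of surface-dependent calibration.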
Bio-inspired grasp control in a robotic hand with massive sensorial input.
Ascari, Luca; Bertocchi, Ulisse; Corradi, Paolo; Laschi, Cecilia; Dario, Paolo
2009-02-01
The capability of grasping and lifting an object in a suitable, stable and controlled way is an outstanding feature for a robot, and thus far one of the major problems to be solved in robotics. No robotic tools able to perform advanced grasp control as, for instance, the human hand does have been demonstrated to date. Due to its central importance in science and in many applications, from biomedicine to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former is contributing a profound understanding of the dynamics of real-time control of slippage and grasp force in the human hand, the latter tries more and more to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amount of data generated by the tactile sensors while grasping, which poses serious problems for the available computational power. In this paper a bio-inspired approach to tactile data processing has been followed in order to design and test a hardware-software robotic architecture that works on the parallel processing of a large amount of tactile sensing signals. The working principle of the architecture is based on the cellular nonlinear/neural network (CNN) paradigm, using both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory-motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the system performance applied to a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects like plastic bottles, soft balls, and Japanese tofu, have been demonstrated.
NASA Astrophysics Data System (ADS)
Narasimha Rao, Gudikandhula; Jagadeeswara Rao, Peddada; Duvvuru, Rajesh
2016-09-01
Wildfires have a significant impact on the atmosphere and on lives. Predicting the exact fire area in a forest may help fire management teams, for example by using drones as robots. Drones are flexible, inexpensive, highly mobile remote sensing platforms that are important for filling substantial data gaps and supplementing the capabilities of manned aircraft and satellite remote sensing systems. In addition, powerful computational tools are essential for predicting the burned area during a forest fire. The aim of this study is to build a smart system for the prediction of burned areas based on a new method, the Semantic Neural Network System (SNNS). A virtual reality simulator is used to support the training of firefighters and other users in protecting the surrounding wildlife. Semantics are valuable first for obtaining an enhanced representation of the burned-area prediction, and second for better adapting the simulation scenario to the users. In particular, results obtained with geometric semantic neural networking are substantially superior to those of other methods. This study suggests that deeper investigation of neural networking in the field of forest fire prediction could be productive.
Robotic Exoskeletons: A Perspective for the Rehabilitation of Arm Coordination in Stroke Patients
Jarrassé, Nathanaël; Proietti, Tommaso; Crocher, Vincent; Robertson, Johanna; Sahbani, Anis; Morel, Guillaume; Roby-Brami, Agnès
2014-01-01
Upper-limb impairment after stroke is caused by weakness, loss of individual joint control, spasticity, and abnormal synergies. Upper-limb movement frequently involves abnormal, stereotyped, and fixed synergies, likely related to the increased use of sub-cortical networks following the stroke. The flexible coordination of the shoulder and elbow joints is also disrupted. New methods for motor learning, based on the stimulation of activity-dependent neural plasticity have been developed. These include robots that can adaptively assist active movements and generate many movement repetitions. However, most of these robots only control the movement of the hand in space. The aim of the present text is to analyze the potential of robotic exoskeletons to specifically rehabilitate joint motion and particularly inter-joint coordination. First, a review of studies on upper-limb coordination in stroke patients is presented and the potential for recovery of coordination is examined. Second, issues relating to the mechanical design of exoskeletons and the transmission of constraints between the robotic and human limbs are discussed. The third section considers the development of different methods to control exoskeletons: existing rehabilitation devices and approaches to the control and rehabilitation of joint coordinations are then reviewed, along with preliminary clinical results available. Finally, perspectives and future strategies for the design of control mechanisms for rehabilitation exoskeletons are discussed. PMID:25520638
Guerrero, Carlos Rodriguez; Fraile Marinero, Juan Carlos; Turiel, Javier Perez; Muñoz, Victor
2013-11-01
Human motor performance, speed and variability are highly susceptible to emotional states. This paper reviews the impact of the emotions on the motor control performance, and studies the possibility of improving the perceived skill/challenge relation on a multimodal neural rehabilitation scenario, by means of a biocybernetic controller that modulates the assistance provided by a haptic controlled robot in reaction to undesirable physical and mental states. Results from psychophysiological, performance and self assessment data for closed loop experiments in contrast with their open loop counterparts, suggest that the proposed method had a positive impact on the overall challenge/skill relation leading to an enhanced physical human-robot interaction experience. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Using neural networks and Dyna algorithm for integrated planning, reacting and learning in systems
NASA Technical Reports Server (NTRS)
Lima, Pedro; Beard, Randal
1992-01-01
The traditional AI answer to the decision making problem for a robot is planning. However, planning is usually CPU-time consuming, depending on the availability and accuracy of a world model. The Dyna system, described in earlier work, uses trial and error to learn a world model which is simultaneously used to plan reactions, resulting in optimal action sequences. It is an attempt to integrate planning, reactive, and learning systems. The architecture of Dyna is presented and its blocks are described. There are three main components of the system. The first is the world model used by the robot for internal world representation; its input is the current state and the action taken in that state, and its output is the corresponding reward and resulting state. The second module is the policy, which observes the current state and outputs the action to be executed by the robot. At the beginning of program execution the policy is stochastic, and through learning it progressively becomes deterministic. The policy decides upon an action according to the output of an evaluation function, which is the third module of the system. The evaluation function takes as input the current state of the system, the action taken in that state, the resulting state, and a reward generated by the world which is proportional to the current distance from the goal state. Originally, the work proposed was as follows: (1) to implement a simple 2-D world where a 'robot' navigates around obstacles and learns the path to a goal, using lookup tables; (2) to replace the world model and the Q estimate function with neural networks; and (3) to apply the algorithm to a more complex world where the use of a neural network would be fully justified. In this paper, the system design and achieved results are described. First, we implement the world model with a neural network and leave Q implemented as a lookup table.
Next, we use a lookup table for the world model and implement the Q function with a neural net. Time limitations prevented the combination of these two approaches. The final section discusses the results and gives clues for future work.
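The Dyna loop described above, act, update Q from real experience, store the transition in the model, then replay remembered transitions as planning updates, can be written compactly with lookup tables. A sketch on an assumed 1-D corridor task (the world, rewards, and constants are illustrative, not the authors' 2-D obstacle world):

```python
import random
random.seed(3)

N_STATES, GOAL = 10, 9            # 1-D corridor: start at 0, reward at 9
ACTIONS = (-1, +1)
alpha, gamma, eps, n_plan = 0.5, 0.95, 0.1, 20

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                        # learned world model: (s, a) -> (r, s')

def env_step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return (1.0 if s2 == GOAL else 0.0), s2

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def backup(s, a, r, s2):
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                          - Q[(s, a)])

for episode in range(30):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        r, s2 = env_step(s, a)
        backup(s, a, r, s2)       # direct RL from real experience
        model[(s, a)] = (r, s2)   # update the world model
        for _ in range(n_plan):   # planning: replay modeled transitions
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            backup(ps, pa, pr, ps2)
        s = s2
```

The planning replays are what distinguish Dyna from plain Q-learning: each real step is amplified by `n_plan` simulated updates drawn from the learned model.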
Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max
2017-08-01
Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signal acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time task characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient, from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.
Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P
2017-03-01
In this paper adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feed-forward neural network (MLFFNN) is used as the controller. Also, in example 4, the FCRNN is investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
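The defining feature of the DRNN over a fully connected recurrent network is that each hidden neuron feeds back only to itself, which is what lets it track plant dynamics with few parameters. A forward-pass sketch (weights are illustrative; the paper's Lyapunov-based update rules are omitted):

```python
import math

class DRNN:
    """Diagonal recurrent network: hidden units have self-feedback only,
    so each neuron carries its own decaying memory trace."""
    def __init__(self, w_in, w_rec, w_out):
        self.w_in, self.w_rec, self.w_out = w_in, w_rec, w_out
        self.h = [0.0] * len(w_rec)

    def step(self, u):
        # h_i(k) = tanh(w_in_i * u(k) + w_rec_i * h_i(k-1))
        self.h = [math.tanh(wi * u + wr * hi)
                  for wi, wr, hi in zip(self.w_in, self.w_rec, self.h)]
        return sum(wo * hi for wo, hi in zip(self.w_out, self.h))

# impulse response: the output persists after the input vanishes,
# then decays because every |w_rec_i| < 1
net = DRNN(w_in=[1.0, 0.5], w_rec=[0.5, -0.3], w_out=[1.0, 1.0])
response = [net.step(u) for u in [1.0] + [0.0] * 10]
```

The diagonal recurrence means the Jacobian with respect to the recurrent weights stays per-neuron, which is what makes the stability analysis tractable compared with an FCRNN.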
Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.; Buck, A. A.; Smith, R.
1994-10-01
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick-and-place robot, a vision system, a CNC milling machine and 3 conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (fuzzy logic with Petri nets), including an artificial neural network (Fuzzy Neural Petri nets), to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.
Villafañe, Jorge Hugo; Valdes, Kristin; Imperio, Grace; Borboni, Alberto; Cantero-Téllez, Raquel; Galeri, Silvia; Negrini, Stefano
2017-05-01
[Purpose] The aim of the present study is to detail the protocol for a randomised controlled trial (RCT) comparing the effects of neural manual therapy vs. robot-assisted mobilization on pain sensitivity, and to analyse the quantitative and qualitative movement of the hand in subjects with hand osteoarthritis. [Subjects and Methods] Seventy-two patients, aged 50 to 90 years old and of both genders, with a diagnosis of hand osteoarthritis (OA), will be recruited. Two groups of 36 participants will receive an experimental intervention (neurodynamic mobilization plus exercise) or a control intervention (robot-assisted passive mobilization plus exercise) for 12 sessions over 4 weeks. Assessment points will be at baseline, end of therapy, and 1 and 3 months after end of therapy. The outcomes of this intervention will be pain and the central pain processing mechanisms. [Result] Not applicable. [Conclusion] A reduction in pain hypersensitivity in hand OA patients would suggest that supraspinal pain-inhibitory areas, including the periaqueductal gray matter, can be stimulated by joint mobilization.
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing its place in the interaction context as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, language-behavior mapping was achieved by a branching structure. The repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human in the given task by autonomously switching phases.
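The fixed-point attractor behavior described in this abstract can be illustrated with a toy contractive recurrent map (a hedged sketch, not the paper's trained network): iterating the hidden-state update from different initial conditions converges to the same internal state, the kind of structure a network can use to represent "waiting".

```python
import numpy as np

def iterate(W, b, h0, steps=200):
    # Repeatedly apply the recurrent update h <- tanh(W h + b).
    h = h0
    for _ in range(steps):
        h = np.tanh(W @ h + b)
    return h

# Contractive recurrent weights (assumed for illustration): because the
# map shrinks distances, all trajectories fall into one fixed point.
W = 0.3 * np.eye(2)
b = np.array([0.5, -0.2])
h_a = iterate(W, b, np.array([0.9, -0.9]))
h_b = iterate(W, b, np.array([-0.9, 0.9]))
```

Both trajectories settle into the same point, which is left unchanged by a further update, i.e. a fixed-point attractor of the dynamics.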
Rieffel, John A.; Valero-Cuevas, Francisco J.; Lipson, Hod
2010-01-01
Traditional engineering approaches strive to avoid, or actively suppress, nonlinear dynamic coupling among components. Biological systems, in contrast, are often rife with these dynamics. Could there be, in some cases, a benefit to high degrees of dynamical coupling? Here we present a distributed robotic control scheme inspired by the biological phenomenon of tensegrity-based mechanotransduction. This emergence of morphology-as-information-conduit or ‘morphological communication’, enabled by time-sensitive spiking neural networks, presents a new paradigm for the decentralized control of large, coupled, modular systems. These results significantly bolster, both in magnitude and in form, the idea of morphological computation in robotic control. Furthermore, they lend further credence to ideas of embodied anatomical computation in biological systems, on scales ranging from cellular structures up to the tendinous networks of the human hand. PMID:19776146
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
From grid cells and visual place cells to multimodal place cell: a new robotic architecture
Jauffret, Adrien; Cuperlier, Nicolas; Gaussier, Philippe
2015-01-01
In the present study, a new architecture for the generation of grid cells (GC) was implemented on a real robot. In order to test this model, a simple place cell (PC) model merging visual PC activity and GC was developed. GC were first built from a simple “several to one” projection (similar to a modulo operation) performed on a neural field coding for path integration (PI). Robotics experiments raised several practical and theoretical issues. To limit the important angular drift of PI, head direction information was introduced in addition to the robot proprioceptive signal coming from the wheel rotation. Next, a simple associative learning between visual place cells and the neural field coding for the PI was used to recalibrate the PI and to limit its drift. Finally, the parameters controlling the shape of the PC built from the GC were studied. Increasing the number of GC obviously improves the shape of the resulting place field. Yet, other parameters, such as the discretization factor of PI or the lateral interactions between GC, can have an important impact on the place field quality and avoid the need for a very large number of GC. In conclusion, our results show that our GC model based on the compression of PI is congruent with neurobiological studies on rodents. GC firing patterns can be the result of a modulo transformation of PI information. We argue that such a transformation may be a general property of the connectivity from the cortex to the entorhinal cortex. Our model predicts that the effect of similar transformations on other kinds of sensory information (visual, tactile, auditory, etc.) in the entorhinal cortex should be observed. Consequently, a given EC cell should react to non-contiguous input configurations in non-spatial conditions, according to the projection from its different inputs. PMID:25904862
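The “several to one” (modulo-like) projection described above can be sketched in one line: many path-integration states map onto the same grid cell, so each cell fires at regularly spaced positions. The 1-D track and ring size below are simplifying assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def active_grid_cell(position_index, n_grid_cells):
    # "Several to one" projection: the path-integration state is
    # compressed onto a small ring of grid cells via a modulo operation.
    return position_index % n_grid_cells

positions = np.arange(20)  # discretized 1-D path-integration states
firing_field_cell0 = [int(p) for p in positions
                      if active_grid_cell(p, 6) == 0]
# Cell 0 fires at regularly spaced positions: [0, 6, 12, 18]
```

The periodic firing field is the 1-D analogue of the hexagonal grid pattern; in the full model the same compression is applied along several orientations of the neural field.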
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier; this paper proposes an improved feature matching between successive video frames using a neural network methodology in order to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned a distance based on the Kinect technology, which can be used by the robot to determine the navigation path and for obstacle detection applications.
External Environment Sensing by a Module on Self-reconfiguration Robot
NASA Astrophysics Data System (ADS)
Goto, Tomotsugu; Uchida, Masafumi; Onogaki, Hitoshi
When a robot and a human work together collaboratively, they share a single working environment and interfere with each other. The boundary of each one's complex, dynamic occupation area changes during the connection movements that make up collaborative work. The main constraint governing the robustness of such connection movements is each party's physical characteristics, that is, the embodiment. While a human's embodiment is essentially fixed, a robot's body is variable; safe and robust connection movements therefore arise when the robot has a body well suited to the human's embodiment. The purpose of this research is to realize collaborative work between a self-reconfiguring robot and a human. To achieve this purpose, the external-environment sensing function of a module, the basic component of the self-reconfiguring robot, was examined. The robot body vibrates when a module actively actuates an arm, and this vibration is observed using acceleration sensors. The measured data reflect differences among the objects touching the robot body. This paper proposes an external-environment sensing technique that identifies these differences using a neural network.
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2017-01-01
An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or” simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system. 
PMID:29311891
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems as well. PMID:25484854
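CORDIC, mentioned above, evaluates trigonometric and exponential terms with only shifts and adds, which is why it suits multiplier-poor FPGA fabric. Below is a minimal rotation-mode sketch in floating point; the iteration count and float arithmetic are simplifying assumptions, since a hardware version would use fixed-point shifts.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC returning (cos(theta), sin(theta))."""
    # Elementary rotation angles atan(2^-i) and the accumulated gain
    # would be precomputed constants (e.g. a ROM table) in hardware.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0   # rotate toward the target angle
        # Each micro-rotation costs only shifts and adds in fixed point.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K               # undo the CORDIC gain (~1.6468)
```

Convergence holds for |theta| below roughly 1.74 rad; larger arguments are first reduced into this range.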
[Informatics, robotics and medicine].
Carpentier, A
1999-01-01
Information technology is becoming common in Medicine. Among the numerous applications are data processing, image analysis, 3D reconstruction and telemedicine, to mention only a few. The interest of computers in surgical research and development is less well known. Two examples are given: computer-aided conception and the simulation of physiologic systems. Robotics has been introduced more recently. There are three types of robotics corresponding to three types of use: targeting, used by neurosurgeons to localize tumors or anatomical structures; visualization, used by general surgeons to hold and mobilize laparoscopes; and instrumentation, introduced more recently by cardiac surgeons to perform totally endoscopic cardiac operations. All these techniques open new ways for tomorrow's "Instrumental Medicine".
He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L
2014-01-01
Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals--the first step towards the development of a brain-machine interface (BMI) system to the X1 exoskeleton--is demonstrated.
An intelligent approach to welding robot selection
NASA Astrophysics Data System (ADS)
Milano, J.; Mauk, S. D.; Flitter, L.; Morris, R.
1993-10-01
In a shipyard where multiple stationary and mobile workcells are employed in the fabrication of components of complex sub-assemblies, efficient operation requires an intelligent method of scheduling jobs and selecting workcells based on optimum throughput and cost. The achievement of this global solution requires the successful organization of resource availability, process requirements, and process constraints. The Off-line Planner (OLP) of the Programmable Automated Weld System (PAWS) is capable of advanced modeling of weld processes and environments as well as the generation of complete weld procedures. These capabilities involve the integration of advanced Computer Aided Design (CAD), path planning, and obstacle detection and avoidance techniques as well as the synthesis of complex design and process information. These existing capabilities provide the basis of the functionality required for the successful implementation of an intelligent weld robot selector and material flow planner. Current efforts are focused on robot selection via the dynamic routing of components to the appropriate work cells. It is proposed that this problem is a variant of the “Traveling Salesman Problem” (TSP), which has been proven to belong to a larger set of optimization problems termed nondeterministic polynomial complete (NP-complete). In this paper, a heuristic approach utilizing recurrent neural networks is explored as a rapid means of producing a near-optimal, if not optimal, weld robot selection.
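For readers unfamiliar with TSP heuristics, the classic nearest-neighbor rule gives a feel for how a fast, near-optimal routing can be built greedily. This is only an illustrative baseline, not the paper's recurrent-neural-network approach; the point coordinates stand in for workcell locations.

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited point."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        # Hop to the closest unvisited workcell/component location.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

The heuristic runs in O(n²) time and typically lands within tens of percent of the optimum, which is why fast approximations like this (or neural-network ones) are attractive for NP-complete routing problems.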
Artificial Neural Network Based Mission Planning Mechanism for Spacecraft
NASA Astrophysics Data System (ADS)
Li, Zhaoyu; Xu, Rui; Cui, Pingyuan; Zhu, Shengying
2018-04-01
The ability to plan and react quickly in dynamic space environments is central to the intelligent behavior of spacecraft. Many planners have been used for space and robotic applications, but it is difficult to encode the domain knowledge and directly use existing techniques such as heuristics to improve the performance of the application systems. Therefore, regarding planning as an advanced control problem, this paper first proposes an autonomous mission planning and action selection mechanism based on a multilayer perceptron neural network to select actions during the planning process and improve efficiency. To demonstrate its availability and effectiveness, we use autonomous mission planning problems of spacecraft, a sophisticated system with complex subsystems and constraints, as an example. Simulation results have shown that artificial neural networks (ANNs) are usable for planning problems. Compared with the existing planning method in EUROPA, the mechanism using ANNs is more efficient and can guarantee stable performance. Therefore, the mechanism proposed in this paper is more suitable for planning problems of spacecraft that require real-time operation and stability.
A closed-loop neurobotic system for fine touch sensing
NASA Astrophysics Data System (ADS)
Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.
2013-08-01
Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to a fast and highly performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covariate with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.
Seepanomwan, Kristsana; Caligiore, Daniele; Cangelosi, Angelo; Baldassarre, Gianluca
2015-12-01
Mental rotation, a classic experimental paradigm of cognitive psychology, tests the capacity of humans to mentally rotate a seen object to decide if it matches a target object. In recent years, mental rotation has been investigated with brain imaging techniques to identify the brain areas involved. Mental rotation has also been investigated through the development of neural-network models, used to identify the specific mechanisms that underlie its process, and with neurorobotics models to investigate its embodied nature. Current models, however, have limited capacities to relate to neuroscientific evidence, to generalise mental rotation to new objects, to suitably represent decision making mechanisms, and to allow the study of the effects of overt gestures on mental rotation. The work presented in this study overcomes these limitations by proposing a novel neurorobotic model that has a macro-architecture constrained by knowledge held on the brain, encompasses a rather general mental rotation mechanism, and incorporates a biologically plausible decision making mechanism. The model was tested using the humanoid robot iCub in tasks requiring the robot to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot gained an enhanced capacity to generalise mental rotation to new objects and to express the possible effects of overt movements of the wrist on mental rotation. The model also represents a further step in the identification of the embodied neural mechanisms that may underlie mental rotation in humans and might also give hints to enhance robots' planning capabilities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Upper limb robotics applied to neurorehabilitation: An overview of clinical practice.
Duret, Christophe; Mazzoleni, Stefano
2017-01-01
During the last two decades, extensive interaction between clinicians and engineers has led to the development of systems that stimulate neural plasticity to optimize motor recovery after neurological lesions. This has resulted in the expansion of the field of robotics for rehabilitation. Studies in patients with stroke-related upper-limb paresis have shown that robotic rehabilitation can improve motor capacity. However, few other applications have been evaluated (e.g. tremor, peripheral nerve injuries or other neurological diseases). This paper presents an overview of the current use of upper limb robotic systems for neurorehabilitation, and highlights the rationale behind their use for the assessment and treatment of common neurological disorders. Rehabilitation robots are little integrated in clinical practice, except after stroke. Although few studies have been carried out to evaluate their effectiveness, evidence from the neurosciences and indications from pilot studies suggests that upper limb robotic rehabilitation can be applied safely in various other neurological conditions. Rehabilitation robots provide an intensity, quality and dose of treatment that exceeds therapist-mediated rehabilitation. Moreover, the use of force fields, multi-sensory environments, feedback etc. renders such rehabilitation engaging and motivating. Future studies should evaluate the effectiveness of rehabilitation robots in neurological pathologies other than stroke.
Sliding Mode Control (SMC) of Robot Manipulator via Intelligent Controllers
NASA Astrophysics Data System (ADS)
Kapoor, Neha; Ohri, Jyoti
2017-02-01
In spite of extensive research, a key technical problem, namely the chattering of the conventional, simple and robust SMC, remains a challenge to researchers and hence limits its practical application. However, newly developed soft-computing-based techniques can provide a solution. In order to combine the advantages of conventional and heuristic soft-computing-based control techniques, in this paper various commonly used intelligent techniques, namely neural networks, fuzzy logic and the adaptive neuro-fuzzy inference system (ANFIS), have been combined with the sliding mode controller (SMC). For validation, the proposed hybrid control schemes have been implemented for tracking a predefined trajectory by a robotic manipulator, incorporating structured and unstructured uncertainties in the system. After reviewing numerous papers, all the commonly occurring uncertainties, like continuous disturbance, uniform random white noise, static friction like Coulomb friction and viscous friction, and dynamic friction like Dahl friction and LuGre friction, have been inserted in the system. Various performance indices, like the norm of the tracking error, chattering in the control input, the norm of the input torque, disturbance rejection and chattering rejection, have been used. Comparative results show that, with almost eliminated chattering, the intelligent SMC controllers are more efficient than the simple SMC. It has also been observed from the results that the ANFIS-based controller has the best tracking performance with a reduced burden on the system. No paper in the literature has been found to include all these structured and unstructured uncertainties together for the motion control of a robotic manipulator.
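Chattering in a conventional SMC comes from the discontinuous sign() term in the switching law u = -k·sign(s). One standard remedy, shown here as a hedged sketch (a boundary-layer saturation, not the paper's fuzzy/ANFIS schemes, and with illustrative gains), smooths the control near the sliding surface:

```python
import numpy as np

def smc_control(s, k=5.0, phi=0.1):
    """Boundary-layer SMC term: sat(s/phi) replaces sign(s).

    s   : sliding variable (distance from the sliding surface)
    k   : switching gain (assumed value, for illustration)
    phi : boundary-layer width (assumed value)
    """
    # Linear inside the boundary layer |s| < phi, saturated outside,
    # so the control no longer switches discontinuously across s = 0.
    return -k * np.clip(s / phi, -1.0, 1.0)
```

Inside the layer the control varies continuously with s instead of flipping between ±k, which is precisely the high-frequency switching (chattering) that wears actuators.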
Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning
Farkaš, Igor; Malík, Tomáš; Rebrová, Kristína
2012-01-01
The recent outburst of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in robot’s peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model, and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well in case of novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action/target suddenly changes during motor execution. PMID:22393319
Kim, Ha Yeon; Yang, Sung Phil; Park, Gyu Lee; Kim, Eun Joo; You, Joshua Sung Hyun
2016-01-01
Robot-assisted and treadmill gait training are promising neurorehabilitation techniques, with advantages over conventional gait training, but the neural substrates underpinning locomotor control remain unknown, particularly during different gait training modes and speeds. The present optical imaging study compared cortical activities during conventional stepping walking (SW), treadmill walking (TW), and robot-assisted walking (RW) at different speeds. Fourteen healthy subjects (6 women, mean age 30.06 ± 4.53 years) completed three walking training modes (SW, TW, and RW) at various speeds (self-selected, 1.5, 2.0, 2.5, and 3.0 km/h). A functional near-infrared spectroscopy (fNIRS) system determined cerebral hemodynamic changes associated with cortical locomotor network areas in the primary sensorimotor cortex (SMC), premotor cortex (PMC), supplementary motor area (SMA), prefrontal cortex (PFC), and sensory association cortex (SAC). There was increased cortical activation in the SMC, PMC, and SMA during the different walking training modes. More global locomotor network activation was observed during RW than TW or SW. As walking speed increased, multiple locomotor network activations and an increased activation power spectrum were observed. This is the first empirical evidence highlighting the neural substrates mediating dynamic locomotion for different gait training modes and speeds. Fast, robot-assisted gait training best facilitated cortical activation associated with locomotor control.
Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J
2018-02-26
Tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, to distinguish human limbs from other objects with similar shapes. Here, the integration of a tactile sensor in adaptive grippers is evaluated, measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network has been proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers have a higher number of non-null tactels (up to 37% more), with a lower mean of pressure values (up to 72% less) than when using a rigid sensor, yielding the softer grip needed in physical human-robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though the human/non-human classification had better results (98.78%) with a rigid sensor.
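The two-level output described in this abstract can be sketched as a shared feature vector feeding two independent softmax heads, one over the 15 object classes and one for the binary human/non-human decision. The function names, weight shapes, and the assumption that the heads share one feature vector are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def two_level_forward(features, w_obj, w_hum):
    """Forward pass of a hypothetical two-level output: the shared
    feature vector (e.g. from a DCNN trunk) feeds a 15-way object
    softmax and a separate 2-way human/non-human softmax."""
    return softmax(w_obj @ features), softmax(w_hum @ features)
```

Splitting the decision into two heads lets the binary human/non-human output be trained and thresholded independently of the fine-grained object classes, which matters when the safety-critical decision is only "is this a limb?".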
Observation-based training for neuroprosthetic control of grasping by amputees.
Agashe, Harshavardhan A; Contreras-Vidal, Jose L
2014-01-01
Current brain-machine interfaces (BMIs) allow upper limb amputees to position robotic arms with a high degree of accuracy, but lack the ability to control hand pre-shaping for grasping different objects. We have previously shown that low frequency (0.1-1 Hz) time domain cortical activity recorded at the scalp via electroencephalography (EEG) encodes information about grasp pre-shaping. To transfer this technology to clinical populations such as amputees, the challenge lies in constructing BMI models in the absence of overt training hand movements. Here we show that it is possible to train BMI models using observed grasping movements performed by a robotic hand attached to the amputees' residual limb. Three transradial amputees controlled the grasping motion of an attached robotic hand via their EEG, following the action-observation training phase. Over multiple sessions, subjects successfully grasped the presented object (a bottle or a credit card) in 53 ± 16% of trials, demonstrating the validity of the BMI models. Importantly, the validation of the BMI model was through closed-loop performance, which demonstrates generalization of the model to unseen data. These results suggest 'mirror neuron system' properties captured by delta band EEG that allow the neural representation of action observation to be used for action control in an EEG-based BMI system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thangavelautham, Jekanthan; Smith, Alexander; Abu El Samid, Nader
Automation of site preparation and resource utilization on the Moon with teams of autonomous robots holds considerable promise for establishing a lunar base. Such multirobot autonomous systems would require limited human support infrastructure, complement necessary manned operations and reduce overall mission risk. We present an Artificial Neural Tissue (ANT) architecture as a control system for autonomous multirobot excavation tasks. An ANT approach requires much less human supervision and pre-programmed human expertise than previous techniques. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to 'breed' controllers for the task at hand in simulation and the fittest controllers are transferred onto hardware for further validation and testing. ANT facilitates 'machine creativity', with the emergence of novel functionality through a process of self-organized task decomposition of mission goals. ANT based controllers are shown to exhibit self-organization, employ stigmergy (communication mediated through the environment) and make use of templates (unlabeled environmental cues). With lunar in-situ resource utilization (ISRU) efforts in mind, ANT controllers have been tested on a multirobot excavation task in which teams of robots with no explicit supervision can successfully avoid obstacles, interpret excavation blueprints, perform layered digging, avoid burying or trapping other robots and clear/maintain digging routes.
NASA Astrophysics Data System (ADS)
Jensen, Winnie; Rousche, Patrick J.
2006-03-01
The success of a cortical motor neuroprosthetic system will rely on the system's ability to effectively execute complex motor tasks in a changing environment. Invasive, intra-cortical electrodes have been successfully used to predict joint movement and grip force of a robotic arm/hand with a non-human primate (Chapin J K, Moxon K A, Markowitz R S and Nicolelis M A L 1999 Real-time control of a robotic arm using simultaneously recorded neurons in the motor cortex Nat. Neurosci. 2 664-70). It is well known that cortical encoding occurs with a high degree of cortical plasticity and depends on both the functional and behavioral context. Questions on the expected robustness of future motor prosthesis systems therefore still remain. The objective of the present work was to study the effect of minor changes in functional movement strategies on the M1 encoding. We compared the M1 encoding in freely moving, non-constrained animals that performed two similar behavioral tasks with the same end-goal, and investigated if these behavioral tasks could be discriminated based on the M1 recordings. The rats depressed a response paddle either with a set of restrictive bars ('WB') or without the bars ('WOB') placed in front of the paddle. The WB task required changes in the motor strategy to complete the paddle press and resulted in highly stereotyped movements, whereas in the WOB task the movement strategy was not restricted. Neural population activity was recorded from 16-channel micro-wire arrays and data up to 200 ms before a paddle hit were analyzed off-line. The analysis showed a significant neural firing difference between the two similar WB and WOB tasks, and using principal component analysis it was possible to distinguish between the two tasks with a best classification at 76.6%. 
While the results are dependent upon a small, randomly sampled neural population, they indicate that information about similar behavioral tasks may be extracted from M1 based on relatively few channels of neural signal for possible use in a cortical neuroprosthetic system.
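The analysis pipeline sketched in this abstract, project population firing onto principal components and then discriminate the two tasks, can be illustrated with a minimal PCA plus nearest-centroid classifier. The function names and the synthetic data are illustrative; the paper's actual classifier and preprocessing are not specified here.

```python
import numpy as np

def pca_project(data, n_components=2):
    """Project trials (rows) onto the top principal components of the
    mean-centred firing-rate matrix, via SVD of the centred data."""
    centred = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

def nearest_centroid(train, labels, test):
    """Classify each test trial by its closest class centroid in PC space."""
    classes = np.unique(labels)
    cents = np.array([train[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test[:, None, :] - cents[None], axis=2)
    return classes[d.argmin(axis=1)]
```

With well-separated task-specific firing patterns this simple pipeline reaches perfect accuracy; the 76.6% reported in the study reflects the much larger overlap between the two similar behavioral tasks.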
Continuum robot arms inspired by cephalopods
NASA Astrophysics Data System (ADS)
Walker, Ian D.; Dawson, Darren M.; Flash, Tamar; Grasso, Frank W.; Hanlon, Roger T.; Hochner, Binyamin; Kier, William M.; Pagano, Christopher C.; Rahn, Christopher D.; Zhang, Qiming M.
2005-05-01
In this paper, we describe our recent results in the development of a new class of soft, continuous backbone ("continuum") robot manipulators. Our work is strongly motivated by the dexterous appendages found in cephalopods, particularly the arms and suckers of octopus, and the arms and tentacles of squid. Our ongoing investigation of these animals reveals interesting and unexpected functional aspects of their structure and behavior. The arrangement and dynamic operation of muscles and connective tissue observed in the arms of a variety of octopus species motivate the underlying design approach for our soft manipulators. These artificial manipulators feature biomimetic actuators, including artificial muscles based on both electro-active polymers (EAP) and pneumatic (McKibben) muscles. They feature a "clean" continuous backbone design, redundant degrees of freedom, and exhibit significant compliance that provides novel operational capacities during environmental interaction and object manipulation. The unusual compliance and redundant degrees of freedom provide strong potential for application to delicate tasks in cluttered and/or unstructured environments. Our aim is to endow these compliant robotic mechanisms with the diverse and dexterous grasping behavior observed in octopuses. To this end, we are conducting fundamental research into the manipulation tactics, sensory biology, and neural control of octopuses. This work in turn leads to novel approaches to motion planning and operator interfaces for the robots. The paper describes the above efforts, along with the results of our development of a series of continuum tentacle-like robots, demonstrating the unique abilities of biologically-inspired design.
Dynamic traversal of large gaps by insects and legged robots reveals a template.
Gart, Sean W; Yan, Changxin; Othayoth, Ratan; Ren, Zhiyi; Li, Chen
2018-02-02
It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap-a simple, ubiquitous obstacle-to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%. Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.
NASA Astrophysics Data System (ADS)
Zamora Ramos, Ernesto
Artificial intelligence is a central part of automation, and with today's technological advances it has taken great strides toward positioning itself as the technology of the future for controlling, enhancing, and perfecting automation. Computer vision, which includes pattern recognition, classification, and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we present novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to the standard histogram equalization for an image. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, it identifies the optimal moment in which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates. 
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons, and convolutional neural networks. Our research with neural networks encountered considerable difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameters (or weights) initialization, and so on, are chosen via a trial and error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate among a group of candidate strategies which would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure to compare initialization strategies to select the best possible among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.
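The dissertation's pre-processing builds on standard histogram equalization, which can be sketched directly: the grey-level histogram's cumulative sum becomes a lookup table that stretches the occupied intensity range to the full 8-bit scale. This is the textbook baseline only (assuming a non-constant 8-bit grayscale image); the non-linear modifications the work proposes are not reproduced here.

```python
import numpy as np

def equalize(img):
    """Baseline histogram equalization for an 8-bit grayscale image:
    map each grey level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()          # first occupied grey level
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

On a dark image whose values cluster near zero, the lookup table spreads them across 0-255, which is exactly the property the nocturnal-navigation enhancement exploits.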
Interfacing insect brain for space applications.
Di Pino, Giovanni; Seidl, Tobias; Benvenuto, Antonella; Sergi, Fabrizio; Campolo, Domenico; Accoto, Dino; Maria Rossini, Paolo; Guglielmelli, Eugenio
2009-01-01
Insects exhibit remarkable navigation capabilities that current control architectures are still far from successfully mimicking and reproducing. In this chapter, we present the results of a study on conceptualizing insect/machine hybrid controllers for improving the autonomy of exploratory vehicles. First, the different principally possible levels of interfacing between insect and machine are examined, followed by a review of current approaches towards hybridity and enabling technologies. Based on the insights of this activity, we propose a double hybrid control architecture which hinges around the concept of "insect-in-a-cockpit." It integrates both biological/artificial (insect/robot) modules and deliberative/reactive behavior. The basic assumption is that "low-level" tasks are managed by the robot, while the "insect intelligence" is exploited whenever high-level problem solving and decision making is required. Both neural and natural interfacing have been considered to achieve robustness and redundancy of exchanged information.
Szczecinski, Nicholas S.; Hunt, Alexander J.; Quinn, Roger D.
2017-01-01
A dynamical model of an animal’s nervous system, or synthetic nervous system (SNS), is a potentially transformational control method. Due to increasingly detailed data on the connectivity and dynamics of both mammalian and insect nervous systems, controlling a legged robot with an SNS is largely a problem of parameter tuning. Our approach to this problem is to design functional subnetworks that perform specific operations, and then assemble them into larger models of the nervous system. In this paper, we present networks that perform addition, subtraction, multiplication, division, differentiation, and integration of incoming signals. Parameters are set within each subnetwork to produce the desired output by utilizing the operating range of neural activity, R, the gain of the operation, k, and bounds based on biological values. The assembly of large networks from functional subnetworks underpins our recent results with MantisBot. PMID:28848419
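The arithmetic subnetworks described above can be caricatured with rate-based neurons whose activity is confined to an operating range [0, R], with synaptic gain k setting the operation's scale. This is a toy steady-state sketch under those two parameters only; the paper's actual subnetworks are built from conductance-based neuron and synapse models.

```python
def neuron(u, R=20.0):
    """Piecewise-linear rate neuron: activity clipped to the operating
    range [0, R], mirroring the role of R in subnetwork design."""
    return min(max(u, 0.0), R)

def add_subnetwork(u1, u2, k=1.0, R=20.0):
    """Addition: two excitatory pathways of gain k converge on one
    output neuron; exact while the sum stays inside [0, R]."""
    return neuron(k * u1 + k * u2, R)

def sub_subnetwork(u1, u2, k=1.0, R=20.0):
    """Subtraction: an excitatory and an inhibitory pathway converge,
    so the output encodes the (rectified) difference."""
    return neuron(k * u1 - k * u2, R)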
NASA Astrophysics Data System (ADS)
Simeral, J. D.; Kim, S.-P.; Black, M. J.; Donoghue, J. P.; Hochberg, L. R.
2011-04-01
The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. 
Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor.
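The population-based Kalman velocity decoder at the core of this record follows the standard predict/update recursion, sketched below. The matrices A, W (state model) and H, Q (observation model) would be fit from training data in the BrainGate pipeline; here they are assumed given, and the click classifier is omitted.

```python
import numpy as np

def kalman_decode(z_seq, A, W, H, Q):
    """Decode a state sequence (e.g. cursor velocity) from a sequence
    of neural observations z: predict with the state model, then
    correct with the Kalman gain applied to the innovation."""
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    out = []
    for z in z_seq:
        # Predict step
        x = A @ x
        P = A @ P @ A.T + W
        # Update step with the neural observation
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)
        x = x + K @ (z - H @ x)
        P = (np.eye(n) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

With a constant observation the estimate converges to the state that zeroes the innovation, which is the behaviour the closed-loop cursor control relies on.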
Simeral, J D; Kim, S-P; Black, M J; Donoghue, J P; Hochberg, L R
2013-01-01
The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. 
Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor. PMID:21436513
Information at the edge of chaos in fluid neural networks
NASA Astrophysics Data System (ADS)
Solé, Ricard V.; Miramontes, Octavio
1995-01-01
Fluid neural networks, defined as neural nets of mobile elements with random activation, are studied by means of several approaches. They are proposed as a theoretical framework for a wide class of systems such as insect societies, collectives of robots, or the immune system. The critical properties of this model are also analysed, showing the existence of a critical boundary in parameter space where maximum information transfer occurs. In this sense, this boundary is in fact an example of the “edge of chaos” in systems like those described in our approach. Recent experiments with ant colonies seem to confirm our result.
Computational neural learning formalisms for manipulator inverse kinematics
NASA Technical Reports Server (NTRS)
Gulati, Sandeep; Barhen, Jacob; Iyengar, S. Sitharama
1989-01-01
An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations, required to follow arbitrary end-effector trajectories, can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
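The defining property of a terminal attractor is finite-time convergence: a flow like dx/dt = -x^(1/3) reaches its equilibrium exactly, unlike the linear flow dx/dt = -x, which only decays exponentially. A short Euler simulation illustrates this; the function and tolerances are illustrative, not the paper's network dynamics.

```python
def settle(x0, dt=1e-3, t_max=5.0, tol=1e-4):
    """Euler-integrate dx/dt = -x**(1/3), a terminal attractor at x = 0.
    Analytically the flow settles in finite time t* = 1.5 * x0**(2/3)
    for x0 > 0; the function returns the simulated settling time."""
    x, t = x0, 0.0
    while t < t_max and abs(x) > tol:
        step = dt * abs(x) ** (1.0 / 3.0)
        x -= step if x > 0 else -step
        t += dt
    return t
```

From x0 = 1 the analytic settling time is 1.5; the exponential flow would need time ln(1/tol) ≈ 9.2 to shrink to the same tolerance, which is the "infinite local stability" advantage the abstract refers to.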
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Culbert, Christopher J. (Editor)
1993-01-01
Papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony
2010-01-01
This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503
Development of a sensor coordinated kinematic model for neural network controller training
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics of mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher degree-of-freedom manipulators is outlined.
Fault detection and isolation for complex system
NASA Astrophysics Data System (ADS)
Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi
2017-07-01
Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of system faults in a complex multiple input multiple output (MIMO) non-linear system. A two-wheeled robot is used as the complex system in this study. The aim of the research is to construct and design a Fault Detection and Isolation algorithm. The proposed method for fault identification uses a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter processes the data from the sensors of the system and indicates faults in the sensor readings. Error prediction is based on the fault magnitude and the time of fault occurrence. The Artificial Neural Network (ANN) is then used to determine the type of fault and isolate the fault in the system.
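The detection half of such an FDI scheme can be sketched with a residual test: an estimator tracks each sensor reading, and a residual exceeding a threshold flags a fault at that time step. This toy uses a fixed-gain scalar estimator in place of the paper's full Kalman filter, and omits the ANN stage that classifies and isolates the fault type; names and thresholds are illustrative.

```python
def detect_faults(measurements, est0=0.0, gain=0.5, threshold=3.0):
    """Residual-based fault detector: track the sensor with a fixed-gain
    estimator and flag time steps whose residual exceeds the threshold."""
    est, faults = est0, []
    for t, z in enumerate(measurements):
        residual = z - est
        if abs(residual) > threshold:
            faults.append(t)        # flag; do not let the fault corrupt the estimate
        else:
            est += gain * residual  # normal tracking update
    return faults
```

Skipping the update on flagged samples keeps the estimate healthy, so a transient sensor fault produces an isolated flag rather than a cascade of false alarms.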
Treadmill vs. overground walking: different response to physical interaction.
Ochoa, Julieth; Sternad, Dagmar; Hogan, Neville
2017-10-01
Rehabilitation of human motor function is an issue of growing significance, and human-interactive robots offer promising potential to meet the need. For the lower extremity, however, robot-aided therapy has proven challenging. To inform effective approaches to robotic gait therapy, it is important to better understand unimpaired locomotor control: its sensitivity to different mechanical contexts and its response to perturbations. The present study evaluated the behavior of 14 healthy subjects who walked on a motorized treadmill and overground while wearing an exoskeletal ankle robot. Their response to a periodic series of ankle plantar flexion torque pulses, delivered at periods different from, but sufficiently close to, their preferred stride cadence, was assessed to determine whether gait entrainment occurred, how it differed across conditions, and if the adapted motor behavior persisted after perturbation. Certain aspects of locomotor control were exquisitely sensitive to walking context, while others were not. Gaits entrained more often and more rapidly during overground walking, yet, in all cases, entrained gaits synchronized the torque pulses with ankle push-off, where they provided assistance with propulsion. Furthermore, subjects entrained to perturbation periods that required an adaptation toward a slower cadence, even though the pulses acted to accelerate gait, indicating a neural adaptation of locomotor control. Lastly, during 15 post-perturbation strides, the entrained gait period was observed to persist more frequently during overground walking. This persistence was correlated with the number of strides walked at the entrained gait period (i.e., longer exposure), which also indicated a neural adaptation. NEW & NOTEWORTHY We show that the response of human locomotion to physical interaction differs between treadmill and overground walking. 
Subjects entrained to a periodic series of ankle plantar flexion torque pulses that shifted their gait cadence, synchronizing ankle push-off with the pulses (so that they assisted propulsion) even when gait cadence slowed. Entrainment was faster overground and, on removal of torque pulses, the entrained gait period persisted more prominently overground, indicating a neural adaptation of locomotor control. Copyright © 2017 the American Physiological Society.
A Symbiotic Brain-Machine Interface through Value-Based Decision Making
Mahmoudi, Babak; Sanchez, Justin C.
2011-01-01
Background: In the development of Brain Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. It is expected that the number and scope of the learning tasks encountered during interaction with the environment as well as the pattern of brain activity will vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decoded users' intended actions by utilizing both motor commands and goal information directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). Methodology: The control architecture designed was based on Actor-Critic learning, which is a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that the Nucleus Accumbens (NAcc) contained a rich representation of goal information in terms of predicting the probability of earning reward, and that it could be translated into an evaluative feedback for adaptation of the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of rat brain and implemented a full closed-loop system in which robot actions were decoded from the single unit activity in M1 based on an evaluative feedback that was estimated from NAcc. Conclusions: Our results suggest that adapting the BMI decoder with an evaluative feedback that is directly extracted from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. 
During closed-loop control, the agent was able to solve a reaching task by capturing the action and reward interdependency in the brain. PMID:21423797
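The Actor-Critic loop underlying the PARC framework can be sketched on a toy action-selection problem: the critic's value estimate stands in for the evaluative feedback decoded from NAcc, and its prediction error updates the actor's action preferences. The environment callback `rewards_for` and all hyperparameters are hypothetical; the real system decodes actions from M1 spiking.

```python
import numpy as np

def run_actor_critic(rewards_for, n_actions=2, episodes=500,
                     alpha=0.1, beta=0.1, seed=0):
    """Minimal actor-critic: sample an action from a softmax policy,
    compute the reward-prediction error, and use it to update both the
    critic's value estimate and the actor's preference for that action.
    Returns the index of the finally preferred action."""
    rng = np.random.default_rng(seed)
    prefs = np.zeros(n_actions)   # actor: action preferences
    value = 0.0                   # critic: expected reward
    for _ in range(episodes):
        p = np.exp(prefs) / np.exp(prefs).sum()   # softmax policy
        a = rng.choice(n_actions, p=p)
        delta = rewards_for(a) - value            # prediction error
        value += alpha * delta                    # critic update
        prefs[a] += beta * delta                  # actor update
    return int(prefs.argmax())

best = run_actor_critic(lambda a: 1.0 if a == 1 else 0.0)
```

Because the same scalar prediction error drives both updates, replacing the explicit reward with a brain-derived estimate of it (as the paper does with NAcc activity) leaves the learning rule unchanged.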
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed comparable accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control.
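A spiking layer with mutual inhibition acts as a competitive feature extractor: each input channel drives one leaky integrate-and-fire neuron, and every spike knocks the other neurons' potentials down, so the strongest channel dominates the spike counts. The discretization and all constants below are illustrative, not the paper's model.

```python
import numpy as np

def wta_spiking(currents, steps=200, dt=1.0, tau=20.0,
                v_thresh=1.0, inhibition=0.5):
    """Leaky integrate-and-fire layer with lateral inhibition; returns
    per-neuron spike counts usable as features for a classifier."""
    currents = np.asarray(currents, dtype=float)
    v = np.zeros(len(currents))
    counts = np.zeros(len(currents), dtype=int)
    for _ in range(steps):
        v += dt / tau * (currents - v)                       # leaky integration
        fired = v >= v_thresh
        if fired.any():
            counts[fired] += 1
            v[fired] = 0.0                                   # reset after spike
            v[~fired] = np.maximum(v[~fired] - inhibition, 0.0)  # inhibition
    return counts
```

The winner-take-all effect sparsifies the representation: weakly driven channels are silenced entirely, which is useful when downstream artificial neurons classify the sEMG pattern.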
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle allowing one to steer the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
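The gist of LSA can be caricatured in one weight-update rule: coincident pre/post activity is potentiated only when it was followed by removal of the external stimulation, and otherwise decays slightly. This is a drastic simplification for intuition only; the paper's model uses spiking networks with spike-timing-dependent Hebbian plasticity, and the rule, rates, and bounds below are hypothetical.

```python
def lsa_step(w, pre, post, stim_removed, eta=0.1):
    """One weight update under a toy Learning-by-Stimulation-Avoidance
    rule: behaviour that ends the stimulation is reinforced, behaviour
    that does not is mildly depressed; weight bounded to [0, 1]."""
    if stim_removed:
        w = w + eta * pre * post           # Hebbian potentiation
    else:
        w = w - 0.1 * eta * pre * post     # mild depression
    return max(0.0, min(1.0, w))
```

Because the "reward" is simply the disappearance of stimulation, no separate reward pathway is needed, which is the paper's central claim.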
Intelligent Vision Systems Independent Research and Development (IR&D) 2006
NASA Technical Reports Server (NTRS)
Patrick, Clinton; Chavis, Katherine
2006-01-01
This report summarizes the results of research conducted under the 2006 Independent Research and Development (IR&D) program at Marshall Space Flight Center (MSFC) at Redstone Arsenal, Alabama. The focus of this IR&D is neural network (NN) technology provided by Imagination Engines, Incorporated (IEI) of St. Louis, Missouri. The technology already has many commercial, military, and governmental applications, and a rapidly growing list of other potential spin-offs. The goal of this IR&D is the implementation and demonstration of the technology for autonomous robotic operations, first in software and ultimately in one or more hardware realizations. Testing is targeted specifically to the MSFC Flat Floor, but may also include other robotic platforms at MSFC, as time and funds permit. For the purpose of this report, the NN technology will be referred to by IEI's designation for a subset configuration of its patented technology suite: Self-Training Autonomous Neural Network Object (STANNO).
New insights into olivo-cerebellar circuits for learning from a small training sample.
Tokuda, Isao T; Hoang, Huu; Kawato, Mitsuo
2017-10-01
Artificial intelligence systems such as deep neural networks have exhibited remarkable performance in simulated video games and 'Go'. In contrast, most humanoid robots in the DARPA Robotics Challenge fell to the ground. The dramatic contrast in performance is mainly due to differences in the amount of training data, which is huge for the former and small for the latter. Animals cannot afford millions of failed trials, which would lead to injury and death; humans fall only a few thousand times before they learn to balance and walk. We hypothesize that a unique closed-loop neural circuit formed by the Purkinje cells, the cerebellar deep nucleus and the inferior olive in and around the cerebellum, together with the gap junctions that regulate synchronous activities of the inferior olive nucleus and are found there at the highest density in the brain, constitutes computational machinery for learning from a small sample. We discuss recent experimental and computational advances associated with this hypothesis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lateral specialization in unilateral spatial neglect: a cognitive robotics model.
Conti, Daniela; Di Nuovo, Santo; Cangelosi, Angelo; Di Nuovo, Alessandro
2016-08-01
In this paper, we present the experimental results of an embodied cognitive robotic approach for modelling the human cognitive deficit known as unilateral spatial neglect (USN). To this end, we introduce an artificial neural network architecture designed and trained to control the spatial attentional focus of the iCub robotic platform. Like the human brain, the architecture is divided into two hemispheres and it incorporates bio-inspired plasticity mechanisms, which allow the development of the phenomenon of the specialization of the right hemisphere for spatial attention. In this study, we validate the model by replicating a previous experiment with human patients affected by the USN and numerical results show that the robot mimics the behaviours previously exhibited by humans. We also simulated recovery after the damage to compare the performance of each of the two hemispheres as additional validation of the model. Finally, we highlight some possible advantages of modelling cognitive dysfunctions of the human brain by means of robotic platforms, which can supplement traditional approaches for studying spatial impairments in humans.
Brain-machine interfaces for controlling lower-limb powered robotic systems.
He, Yongtian; Eguren, David; Azorín, José M; Grossman, Robert G; Luu, Trieu Phat; Contreras-Vidal, Jose L
2018-04-01
Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding and an evolutionary construction procedure that accounts for failure in the evolutionary construction of Block-Based Neural Networks (BBNNs). In the modified gene coding, the genes encoding weights are arranged on the chromosome according to the positional relationship between the weight genes and the structure genes. This increases the efficiency of search by crossover, which is expected to improve the convergence rate of construction and shorten construction time. In the failure-aware construction procedure, a structure adapted to a failure is built in the state where the failure has occurred, so that the BBNN can be reconstructed in a short time when a failure arises. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction time.
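The role of gene arrangement can be seen in a minimal evolutionary loop (our illustrative stand-in, not the paper's BBNN encoding): with single-point crossover, genes that sit next to each other on the chromosome are rarely split apart, which is exactly why the modified coding places related weight and structure genes adjacently. All parameters below are assumptions.

```python
import random

def evolve(target, pop_size=30, generations=100, seed=1):
    """Minimal elitist genetic algorithm: single-point crossover plus
    one-bit mutation over a binary chromosome. A single crossover cut
    preserves any co-adapted group of genes that are stored adjacently."""
    random.seed(seed)
    n = len(target)
    fitness = lambda c: sum(g == t for g, t in zip(c, target))
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:                 # perfect structure found
            break
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)         # single-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] ^= 1      # one-bit mutation
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return pop[0], fitness(pop[0])

best, score = evolve([1, 0, 1, 1, 0, 0, 1, 0])
```

Reconstruction after failure would amount to restarting this loop from the surviving population with the failed blocks masked out of the fitness evaluation.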
Designing and implementing nervous system simulations on LEGO robots.
Blustein, Daniel; Rosenthal, Nikolai; Ayers, Joseph
2013-05-25
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American Lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus as described by Braitenberg and are particularly well suited for investigation using the NXT platform.(1) The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum.
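The decussating exteroceptive reflex described above is the classic Braitenberg wiring, and its core fits in one function (a sketch; the gain and sensor scaling are illustrative assumptions):

```python
def braitenberg_step(left_sensor, right_sensor, gain=1.0):
    """One reflex step of a Braitenberg-style vehicle with decussating
    (crossed) excitatory connections: the left sensor drives the right
    wheel and vice versa, so the robot turns toward the stronger
    stimulus (positive taxis). Uncrossed wiring would reverse the turn
    and produce avoidance instead."""
    left_motor = gain * right_sensor
    right_motor = gain * left_sensor
    return left_motor, right_motor

# Stimulus stronger on the left: the right wheel spins faster,
# steering the vehicle to the left, toward the source.
motors = braitenberg_step(0.8, 0.2)
```

On the NXT platform the same mapping would run inside the LabVIEW nervous-system simulation, with the sensor readings standing in for the lobster's exteroceptors.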
Park, Gibeom; Tani, Jun
2015-12-01
The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by multiple-timescale dynamics (MTRNN) was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature-perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop a further cognitive capability for controlling its internal contextual processes as situated in on-going task sequences, without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, exploiting the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neural network representation and learning of mappings and their derivatives
NASA Technical Reports Server (NTRS)
White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald
1991-01-01
Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
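The claim that a network's derivatives are as approximable as the mapping itself rests on the fact that those derivatives are smooth, exactly computable expressions of the weights. A small sketch (random untrained weights, scalar input; our own illustration): the analytic derivative of a one-hidden-layer tanh network agrees with a central finite difference.

```python
import numpy as np

def mlp(x, W1, b1, W2):
    """One-hidden-layer network y = W2 . tanh(W1 * x + b1) for scalar x."""
    return W2 @ np.tanh(W1 * x + b1)

def mlp_grad(x, W1, b1, W2):
    """Analytic dy/dx, using d tanh(u)/du = 1 - tanh(u)^2."""
    h = np.tanh(W1 * x + b1)
    return W2 @ ((1.0 - h ** 2) * W1)

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
x, eps = 0.3, 1e-6
numeric = (mlp(x + eps, W1, b1, W2) - mlp(x - eps, W1, b1, W2)) / (2 * eps)
analytic = mlp_grad(x, W1, b1, W2)
```

Applications such as sensitivity analysis and control rely on exactly this property: once the network fits the mapping, its derivative comes for free.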
A biologically inspired neural net for trajectory formation and obstacle avoidance.
Glasius, R; Komoda, A; Gielen, S C
1996-06-01
In this paper we present a biologically inspired two-layered neural network for trajectory formation and obstacle avoidance. The two topographically ordered neural maps consist of analog neurons having continuous dynamics. The first layer, the sensory map, receives sensory information and builds up an activity pattern which contains the optimal solution (i.e. shortest path without collisions) for any given set of current position, target positions and obstacle positions. Targets and obstacles are allowed to move, in which case the activity pattern in the sensory map will change accordingly. The time evolution of the neural activity in the second layer, the motor map, results in a moving cluster of activity, which can be interpreted as a population vector. Through the feedforward connections between the two layers, input of the sensory map directs the movement of the cluster along the optimal path from the current position of the cluster to the target position. The smooth trajectory is the result of the intrinsic dynamics of the network only. No supervisor is required. The output of the motor map can be used for direct control of an autonomous system in a cluttered environment or for control of the actuators of a biological limb or robot manipulator. The system is able to reach a target even in the presence of an external perturbation. Computer simulations of a point robot and a multi-joint manipulator illustrate the theory.
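The sensory-map idea can be sketched on a grid (a discrete caricature, not the paper's analog-neuron dynamics; grid size, iteration count, and the relaxation rule are our assumptions): the target is clamped high, obstacles are clamped to zero, free cells repeatedly average their neighbours, and the converged field has no local maxima other than the target, so gradient ascent yields a collision-free path.

```python
import numpy as np

def sensory_map(grid, target, iters=300):
    """Relax a neural activity field over the workspace. The target cell
    stays maximally active, obstacle cells stay silent, and every free
    cell averages its four neighbours each step."""
    act = np.zeros(grid.shape, dtype=float)
    free = grid == 0
    for _ in range(iters):
        p = np.pad(act, 1)                       # zero boundary acts as a wall
        act = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        act[~free] = 0.0
        act[target] = 1.0
    return act

def greedy_path(act, start, max_steps=50):
    """Follow the activity gradient uphill until no neighbour is higher."""
    pos, path = start, [start]
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < act.shape[0] and 0 <= c + dc < act.shape[1]]
        best = max(nbrs, key=lambda q: act[q])
        if act[best] <= act[pos]:
            break
        pos = best
        path.append(pos)
    return path

grid = np.zeros((7, 7), dtype=int)
grid[3, 1:6] = 1                                 # wall with gaps at both ends
field = sensory_map(grid, target=(6, 3))
route = greedy_path(field, start=(0, 3))
```

Because moving targets and obstacles simply change the clamped cells, re-running the relaxation updates the path, which matches the paper's point that the smooth trajectory emerges from the intrinsic dynamics without a supervisor.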
Third Conference on Artificial Intelligence for Space Applications, part 1
NASA Technical Reports Server (NTRS)
Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)
1987-01-01
The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed.
Hussain, Irfan; Santarnecchi, Emiliano; Leo, Andrea; Ricciardi, Emiliano; Rossi, Simone; Prattichizzo, Domenico
2017-07-01
Supernumerary robotic limbs are a recently introduced class of wearable robots that, differently from traditional prostheses and exoskeletons, aim at adding extra effectors (i.e., arms, legs, or fingers) to the human user rather than substituting or enhancing the natural ones. However, it is still unknown whether the use of supernumerary robotic limbs leads to specific neural modifications in brain dynamics. The illusion of owning a body part has already been demonstrated in many experimental settings, such as those relying on multisensory integration (e.g., the rubber hand illusion), on prostheses, and even on virtual reality. In this paper we present a description of a novel MRI-compatible supernumerary robotic finger together with preliminary observations from two functional magnetic resonance imaging (fMRI) experiments, in which brain activity was measured before and after a period of training with the robotic device, and during the use of the novel MRI-compatible version of the supernumerary robotic finger. Results showed that the use of the MR-compatible robotic finger is safe and does not produce artifacts in MRI images. Moreover, training with the supernumerary robotic finger recruits a network of motor-related cortical regions (i.e., primary and supplementary motor areas), the same motor network recruited by fully physiological voluntary motor gestures.
Minimalistic toy robot to analyze a scenery of speaker-listener condition in autism.
Giannopulu, Irini; Montreynaud, Valérie; Watanabe, Tomio
2016-05-01
Atypical neural architecture causes impairment of communication capabilities and reduces the ability to represent the referential statements of other people in children with autism. In a "speaker-listener" communication scenario, we analyzed verbal and emotional expression in neurotypical children (n = 20) and in children with autism (n = 20). The speaker was always a child, and the listener was either a human or a minimalistic robot that reacts to speech only by nodding. Although both groups performed the task, everything happened as if the robot allowed children with autism to elaborate a multivariate equation, encoding and conceptualizing within the brain and externalizing it as unconscious emotion (heart rate) and conscious verbal speech (words). Such behavior suggests that minimalistic artificial environments such as toy robots could serve as a root of neuronal organization and reorganization, with the potential to improve brain activity.
Study of robot landmark recognition with complex background
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Yang, Jia
2007-12-01
Perceiving and recognizing environmental characteristics is of great importance for assisting robots in path planning, position navigation, and task execution. To solve the problem of monocular-vision landmark recognition for a mobile intelligent robot moving against a complex background, we propose a nested region-growing algorithm that fuses prior color information and starts from the current maximum convergence center, providing invariance to changes in position, scale, rotation, jitter, and weather conditions. First, an experimentally determined threshold based on the RGB color model is used for an initial image segmentation, in which some objects and scene parts with colors similar to the landmarks are detected together with the landmarks. Second, with the current maximum convergence center of the segmented image as the seed point, the region-growing algorithm establishes several regions of interest (ROIs) in order. Based on shape characteristics, a quick and effective primitive-based contour analysis decides whether the current ROI should be kept or discarded after each growing step; each retained ROI is then initially judged and positioned. When this position information is fed back to the gray-level image, the complete landmarks are extracted accurately by a second segmentation restricted to the landmark area. Finally, landmarks are recognized by a Hopfield neural network. Experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
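The region-growing step can be illustrated with a simplified stand-in (the tolerance, 4-connectivity, and grayscale input are our assumptions; the paper works on color-segmented images):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity is within `tol` of the seed value. The returned boolean
    mask is a region of interest (ROI) for later contour analysis."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.full((6, 6), 200, dtype=np.uint8)   # bright background
img[2:5, 2:5] = 40                           # dark landmark patch
roi = region_grow(img, seed=(3, 3))
```

In the nested scheme, each converged ROI would be passed to the contour check and, if rejected, removed so that the next maximum convergence center seeds the following region.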
From self-assessment to frustration, a small step toward autonomy in robotic navigation
Jauffret, Adrien; Cuperlier, Nicolas; Tarroux, Philippe; Gaussier, Philippe
2013-01-01
Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedback on the quality of behavior from a given fitness system in order to make correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek human interaction to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an on-line novelty detection algorithm, that may be able to self-evaluate any sensory-motor strategy. This architecture learns contingencies between sensations and actions, predicting the expected sensation from the previous perception. The prediction error arising from surprising events provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on prediction progress allows the system to regulate its behavior to solve complex navigation tasks and to ask for help when it detects deadlock situations. We propose that this model could be a key structure toward self-assessment and autonomy. Several experiments account for these properties for two different strategies (road following and place-cell-based navigation) in different situations. PMID:24115931
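The self-assessment idea (predict the next sensation, treat prediction error as the quality signal) can be sketched with a scalar toy model (our own illustration; the real architecture is a neural network over sensory-motor contingencies):

```python
def novelty_monitor(pairs, eta=0.5):
    """Learn sensation(t+1) from sensation(t) with a per-state running
    average. The prediction error is the self-assessment signal: it
    decays for well-mastered contingencies and jumps for surprising
    (novel or deadlock) situations."""
    model, errors = {}, []
    for s, s_next in pairs:
        pred = model.get(s, 0.0)
        err = abs(s_next - pred)
        errors.append(err)
        model[s] = pred + eta * (s_next - pred)   # running-average update
    return errors

# The same transition repeated: error shrinks (prediction progress),
# then an unexpected transition makes it jump (novelty / frustration).
errs = novelty_monitor([("a", 1.0)] * 6 + [("a", 5.0)])
```

A second-order controller built on this signal could switch strategies, or call for human help, whenever the error stays high instead of decaying.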
The embodiment of assistive devices-from wheelchair to exoskeleton
NASA Astrophysics Data System (ADS)
Pazzaglia, Mariella; Molinari, Marco
2016-03-01
Spinal cord injuries (SCIs) place a heavy burden on the healthcare system and have a high personal impact and marked socio-economic consequences. Clinically, no absolute cure for these conditions exists. However, in recent years there has been an increased focus on new robotic technologies that can change the way we think about the prognosis for recovery and about treating some functions of the body affected after SCI. This review has two goals. The first is to assess the possibility of the embodiment of functional assistive tools after traumatic disruption of the neural pathways between the brain and the body. To this end, we examine how altered sensorimotor information modulates the sense of the body in SCI. The second goal is to map the phenomenological experience of using external tools that typically extend the potential of a body physically impaired by SCI. More specifically, we focus on the difference between the perception of one's physically augmented and non-augmented affected body, based on observable and measurable behaviors. We discuss potential clinical benefits of enhanced embodiment of external objects by way of multisensory interventions. This review argues that the future evolution of human robotic technologies will require adopting an embodied approach, taking advantage of brain plasticity to allow bionic limbs to be mapped within the neural circuits of physically impaired individuals.
Understanding the internal states of others by listening to action verbs.
Di Cesare, G; Fasano, F; Errante, A; Marchi, M; Rizzolatti, G
2016-08-01
The internal state of others can be understood by observing their actions or listening to their voice. While the neural bases of action style (vitality forms) have been investigated, there is no information on how we recognize others' internal states by listening to their speech. Here, using fMRI, we investigated the neural correlates of auditory vitality forms while participants listened to action verbs in three different conditions: a human voice pronouncing the verbs in a rude or gentle way, a robot voice pronouncing the same verbs without vitality forms, and a scrambled version of the same verbs pronounced by a human voice. In agreement with previous studies on vitality form encoding, we found specific activation of the central part of the insula while participants listened to a human voice conveying specific vitality forms. In addition, listening to both human and robot voices activated the posterior part of the left inferior frontal gyrus and the parieto-premotor circuit typically described as active during observation and execution of arm actions. Finally, the superior temporal gyrus was activated bilaterally in all three conditions. We conclude that the central part of the insula is a key region for vitality form processing, allowing the understanding of vitality forms regardless of the modality by which they are conveyed. Copyright © 2016. Published by Elsevier Ltd.
Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas
2009-01-01
Background: Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology: First we present the physiological basis of retinal function, showing the role of lateral inhibition in the generation of optical illusions and pathological processes. Then, based on these biological considerations about real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e. a non-isotropic reaction-diffusion partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed-potential Hamiltonian flows, compare it with the watershed method and use it for contouring. Conclusions: We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
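The discrete lateral-inhibition operator behind retinal contrast enhancement can be sketched in one dimension (the inhibition weight, neighbourhood size, and border handling are illustrative assumptions):

```python
import numpy as np

def lateral_inhibition(signal, w_inhib=0.4):
    """1-D discrete lateral inhibition: each unit responds with its own
    input minus a fraction of its two neighbours' inputs. Uniform
    regions are damped while edges are overshot and undershot, the
    Mach-band-like contrast enhancement attributed to the retina."""
    left, right = np.roll(signal, 1), np.roll(signal, -1)
    out = signal - w_inhib * (left + right)
    out[0] = signal[0] - w_inhib * signal[1]      # border cells see one neighbour
    out[-1] = signal[-1] - w_inhib * signal[-2]
    return out

step = np.array([1.0] * 5 + [3.0] * 5)            # a luminance step edge
resp = lateral_inhibition(step)
```

The response dips just before the edge and peaks just after it, which is the discrete analogue of the contrasting effect the continuous reaction-diffusion version produces on medical images.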
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1
NASA Technical Reports Server (NTRS)
Culbert, Christopher J. (Editor)
1993-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
Advances in neuroprosthetic learning and control.
Carmena, Jose M
2013-01-01
Significant progress has occurred in the field of brain-machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action. PMID:23700383
Neural dynamic optimization for control systems. I. Background.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Neural dynamic optimization for control systems. III. Applications.
Seong, C Y; Widrow, B
2001-01-01
For pt. II see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper demonstrates NDO with several applications including control of autonomous vehicles and of a robot arm, while the two other companion papers of this topic describe the background for the development of NDO and present the theory of the method, respectively.
Neural dynamic optimization for control systems. II. Theory.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Advances in Neuroprosthetic Learning and Control
Carmena, Jose M.
2013-01-01
Significant progress has occurred in the field of brain–machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action. PMID:23700383
A spatiotemporal structure: common to subatomic systems, biological processes, and economic cycles
NASA Astrophysics Data System (ADS)
Naitoh, Ken
2012-03-01
A theoretical model derived based on a quasi-stability concept applied to momentum conservation (Naitoh, JJIAM, 2001, Artificial Life Robotics, 2008, 2010) has revealed the spatial structure of various systems. This model explains the reason why particles such as biological cells, nitrogenous bases, and liquid droplets have bimodal size ratios of about 2:3 and 1:1. This paper shows that the same theory holds true for several levels of parcels from baryons to stars in the cosmos: specifically, at the levels of nuclear force, van der Waals force, surface tension, and the force of gravity. A higher order of analysis clarifies other asymmetric ratios related to the halo structure seen in atoms and amino acids. We will also show that our minimum hypercycle theory for explaining the morphogenetic cycle (Naitoh, Artificial Life Robotics, 2008) reveals other temporal cycles such as those of economic systems and the circadian clock as well as the fundamental neural network pattern (topological pattern). Finally, a universal equation describing the spatiotemporal structure of several systems will be derived, which also leads to a general concept of quasi-stability.
NASA Technical Reports Server (NTRS)
Padgett, Mary L. (Editor)
1993-01-01
The present conference discusses such neural networks (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle Arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.
The GEOS-5 Neural Network Retrieval for AOD
NASA Astrophysics Data System (ADS)
Castellanos, P.; da Silva, A. M., Jr.
2017-12-01
One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground-based Aerosol Robotic Network, AERONET), we are able to create a global, homogeneous satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.
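The NNR approach described above amounts to supervised regression: train a network to map multi-channel TOA reflectance to AERONET-observed AOD. A minimal sketch of the idea, using synthetic data in place of MODIS/AERONET pairs (the network size, learning rate, and linear target are illustrative assumptions, not the operational GEOS-5 configuration):

```python
import numpy as np

# Sketch of the NNR idea: fit a small network mapping TOA reflectances to a
# target AOD. All data here is synthetic, standing in for MODIS/AERONET pairs.

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (256, 3))            # 3 "reflectance channels"
aod = 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.1      # synthetic target AOD

# One-hidden-layer network trained by plain full-batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

losses, lr = [], 0.1
for _ in range(1000):
    pred, h = forward(X)
    err = pred - aod
    losses.append(np.mean(err ** 2))
    # Backpropagation through the two layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])   # training error drops substantially
```

The same supervised-mapping structure is what lets one AOD target (AERONET) homogenize reflectances from multiple sensors.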
Multisensory visual servoing by a neural network.
Wei, G Q; Hirzinger, G
1999-01-01
Conventional computer vision methods for determining a robot's end-effector motion from sensory data need sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). These calibrations involve substantial computation and can be difficult, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem that requires no calibration. Two kinds of sensory data, namely camera images and laser range data, are used as the input to a multilayer feedforward network that learns the direct transformation from the sensory data to the required motions. This provides a practical sensor fusion method. Using a recursive motion strategy and a network correction term, we relax the requirement that the learned transformation be exact. Another important feature of our work is that the goal position can be changed without network retraining. Experimental results show the effectiveness of our method.
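The recursive motion strategy in this abstract is why the learned sensor-to-motion mapping need not be exact: each step removes a fraction of the remaining pose error, so iteration converges anyway. A toy sketch of that mechanism (the sensor model, the 30% underestimate, and the 3-DOF pose are illustrative assumptions):

```python
import numpy as np

# Recursive visual-servoing sketch: an imperfect learned mapping from sensory
# error to motion still converges when applied iteratively.

def sense(pose, goal):
    # Hypothetical sensor: feature-space error between current pose and goal
    # (stand-in for camera image + laser range features).
    return goal - pose

def learned_mapping(error):
    # Stand-in for the trained feedforward network: it systematically
    # underestimates the required motion by 30%.
    return 0.7 * error

goal = np.array([0.5, -0.2, 0.1])
pose = np.zeros(3)
for _ in range(20):                  # recursive motion strategy
    pose = pose + learned_mapping(sense(pose, goal))

print(np.allclose(pose, goal, atol=1e-3))
```

After n iterations the residual error is 0.3^n of the original, so even a coarsely learned mapping servoes to the goal.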
Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction
Li, Zhencai; Wang, Yang; Liu, Zhen
2016-01-01
The purpose of this work is to investigate accurate trajectory tracking control of a wheeled mobile robot (WMR) based on slip model prediction. A nonholonomic WMR faces an increased risk of slippage (such as longitudinal and lateral wheel slip) when traveling on unstructured outdoor terrain. In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model-approximating capabilities of a nonlinear state-space NN, and the unscented Kalman filter is used to train the NN's weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate the control inputs of the WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703
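The core trick in UKF-trained networks is to treat the network weights themselves as the filter state: each slip measurement becomes a Kalman update on the weights. A minimal sketch with a deliberately tiny two-weight "network" (the linear slip model, noise levels, and sigma-point settings are illustrative assumptions, not the paper's):

```python
import numpy as np

# UKF-as-trainer sketch: network weights are the filter state; each wheel-slip
# measurement updates the weights online via the unscented transform.

rng = np.random.default_rng(0)

def net(w, x):
    # Toy stand-in for the slip network: deviation = w0 + w1 * commanded speed.
    return w[0] + w[1] * x

n = 2                                  # number of network weights
lam = 1.0                              # Julier sigma-point scaling (kappa = 1)
Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
Wm[0] = lam / (n + lam)

w = np.zeros(n)                        # weight estimate (filter state)
P = np.eye(n)                          # weight covariance
Q, R = 1e-8 * np.eye(n), 1e-4          # process / measurement noise

w_true = np.array([0.1, 0.5])
for _ in range(200):
    x = rng.uniform(-1.0, 1.0)
    y = net(w_true, x) + 0.01 * rng.standard_normal()

    P = P + Q                          # random-walk prediction of the weights
    S = np.linalg.cholesky((n + lam) * P)
    sig = np.hstack([w[:, None], w[:, None] + S, w[:, None] - S])

    ys = net(sig, x)                   # propagate sigma points through the net
    yhat = Wm @ ys
    Pyy = Wm @ (ys - yhat) ** 2 + R
    Pwy = (sig - w[:, None]) @ (Wm * (ys - yhat))
    K = Pwy / Pyy                      # Kalman gain on the weights
    w = w + K * (y - yhat)
    P = P - np.outer(K, K) * Pyy

print(np.round(w, 2))
```

Because sigma points are simply propagated through `net`, the same loop applies unchanged to a genuinely nonlinear network, which is what makes the UKF attractive as an online trainer.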
Morse, Anthony F; Cangelosi, Angelo
2017-02-01
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.
Tacchino, Giulia; Gandolla, Marta; Coelli, Stefania; Barbieri, Riccardo; Pedrocchi, Alessandra; Bianchi, Anna M
2017-06-01
Two key ingredients of a successful neuro-rehabilitative intervention have been identified as intensive and repetitive training and subject's active participation, which can be coupled in an active robot-assisted training. To exploit these two elements, we recorded electroencephalography, electromyography and kinematics signals from nine healthy subjects performing a 2×2 factorial design protocol, with subject's volitional intention and robotic glove assistance as factors. We quantitatively evaluated primary sensorimotor, premotor and supplementary motor areas activation during movement execution by computing event-related desynchronization (ERD) patterns associated to mu and beta rhythms. ERD patterns showed a similar behavior for all investigated regions: statistically significant ERDs began earlier in conditions requiring subject's volitional contribution; ERDs were prolonged towards the end of movement in conditions in which the robotic assistance was present. Our study suggests that the combination between subject volitional contribution and movement assistance provided by the robotic device (i.e., active robot-assisted modality) is able to provide early brain activation (i.e., earlier ERD) associated with stronger proprioceptive feedback (i.e., longer ERD). This finding might be particularly important for neurological patients, where movement cannot be completed autonomously and passive/active robot-assisted modalities are the only possibilities of execution.
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation
Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.
2016-01-01
Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing are now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. 
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
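Rate-based Poisson spike generation, the first encoding technique the abstract lists, converts each pixel intensity into a firing rate and draws spikes independently per timestep. A small sketch (the rates, duration, and four-"pixel" input are illustrative, not the dataset's actual parameters):

```python
import numpy as np

# Rate-based Poisson spike coding: each pixel intensity sets a firing rate,
# and spikes are drawn independently in every simulation timestep.

rng = np.random.default_rng(42)

def poisson_spikes(rates_hz, duration_s, dt=0.001):
    """Return a (n_neurons, n_steps) boolean spike raster."""
    steps = int(duration_s / dt)
    p = np.clip(np.asarray(rates_hz) * dt, 0.0, 1.0)   # per-step spike prob.
    return rng.random((len(p), steps)) < p[:, None]

# Map 4 "pixel" intensities (0..1) to rates up to 100 Hz, simulate 1 s.
pixels = np.array([0.0, 0.25, 0.5, 1.0])
raster = poisson_spikes(100.0 * pixels, duration_s=1.0)
print(raster.sum(axis=1))   # spike counts roughly proportional to intensity
```

The resulting raster is exactly the kind of spike-train stimulus an SNN model or neuromorphic chip would consume in place of the raw MNIST image.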
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animation film, criminology, and robotics applications. This study combines background subtraction with a back-propagation neural network, with the aim of producing and finding similarity between movements. Acquisition used an 8 MP camera recording MP4 video, 48 seconds in duration at 30 frames/s; extracting the video produced 1444 frames for the hand-motion identification process. The image-processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are used to distinguish one object from another. Feature extraction is performed by motion-based morphology analysis using the seven invariant moments, producing four motion classes: no object, hand down, hand to side, and hands up. The identification process recognizes the hand movement from these seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture propagates the input values through the implemented system to the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. Training was performed to obtain the best results.
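The pipeline above is background subtraction followed by moment-invariant features feeding a classifier; the "seven invariant moments" are Hu's moment invariants. A compact sketch of the first two stages, computing only the first two of the seven invariants for brevity and using a synthetic frame in place of real video:

```python
import numpy as np

# Background subtraction isolates the moving hand, then Hu-style moment
# invariants summarize its shape. Frames here are synthetic binary images.

def subtract_background(frame, background, thresh=0.1):
    return (np.abs(frame - background) > thresh).astype(float)

def hu12(mask):
    # First two Hu invariants from normalized central moments.
    ys, xs = np.mgrid[:mask.shape[0], :mask.shape[1]]
    m00 = mask.sum()
    cx, cy = (xs * mask).sum() / m00, (ys * mask).sum() / m00
    mu = lambda p, q: (((xs - cx) ** p) * ((ys - cy) ** q) * mask).sum()
    eta = lambda p, q: mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

bg = np.zeros((32, 32))
frame = bg.copy(); frame[5:15, 8:20] = 1.0          # "hand" blob
shifted = bg.copy(); shifted[15:25, 10:22] = 1.0    # same blob, translated
f1 = hu12(subtract_background(frame, bg))
f2 = hu12(subtract_background(shifted, bg))
print(np.allclose(f1, f2))   # moment invariants ignore translation
```

Translation (and, for the full set of seven, rotation and scale) invariance is what makes these moments useful inputs for the back-propagation classifier.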
Solving Navigational Uncertainty Using Grid Cells on Robots
Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.
2010-01-01
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.
Chen, Ziting; Li, Zhijun; Chen, C L Philip
2017-06-01
An adaptive neural control strategy for multiple-input multiple-output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, a radial basis function neural network, a barrier Lyapunov function (BLF), and a disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that output tracking is achieved with all states remaining in the constraint sets, and the usual assumption on the nonsingularity of the unknown control coefficient matrices is eliminated. It is rigorously proved that the constructed adaptive neural control guarantees the semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, simulation studies on a 2-DOF robotic manipulator indicate that the designed adaptive control is effective.
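The barrier Lyapunov function idea behind this class of designs can be sketched as follows (a generic log-type form for a single constrained tracking error; the paper's exact construction may differ in detail):

```latex
% Generic log-type BLF for keeping a tracking error z_1 inside |z_1| < k_b:
V_1 = \frac{1}{2}\,\ln\!\frac{k_b^{2}}{k_b^{2}-z_1^{2}},
\qquad
\dot{V}_1 = \frac{z_1\,\dot{z}_1}{k_b^{2}-z_1^{2}} .
% V_1 grows without bound as |z_1| \to k_b, so proving V_1 stays bounded
% along closed-loop trajectories keeps the state inside its constraint set.
```

This is why boundedness of the BLF, rather than of an ordinary quadratic Lyapunov function, directly yields constraint satisfaction.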
Heuristic control of the Utah/MIT dextrous robot hand
NASA Technical Reports Server (NTRS)
Bass, Andrew H., Jr.
1987-01-01
Basic hand grips and sensor interactions that a dextrous robot hand will need as part of the operation of an EVA Retriever are analyzed. What is to be done with a dextrous robot hand is examined, along with how such a complex machine might be controlled. It was assumed throughout that an anthropomorphic robot hand should perform tasks just as a human would; i.e., the most efficient approach to developing control strategies for the hand is to model actual hand actions and perform the same tasks in the same ways. Therefore, the basic grips that human hands perform, as well as hand grip action, were analyzed. It was also important to examine what is termed sensor fusion: the integration of various disparate sensor feedback paths, which can be spatially and temporally separated as well as of different sensor types. Neural networks are seen as a means of integrating these varied sensor inputs and types. Basic heuristics of hand actions and grips were developed; these heuristics offer promise for controlling dextrous robot hands in a more natural and efficient way.
Tutorial: Neural networks and their potential application in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhrig, R.E.
A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development, and application to a variety of real-world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very 'noisy' signals (e.g., electroencephalograms), modeling complex systems that cannot be modelled mathematically, and predicting whether proposed loans will be good or will fail. This paper presents a brief tutorial on neural networks and describes research on their potential applications to nuclear power plants.
Nishimoto, Ryunosuke; Tani, Jun
2009-07-01
This paper presents a neuro-robotics experiment on developmental learning of goal-directed actions. The robot was trained to predict the visuo-proprioceptive flow of a set of goal-directed behaviors through iterative tutor-training processes. The learning employed a dynamic neural network model characterized by multiple time-scale dynamics. The experimental results showed that functional hierarchical structures emerge through developmental stages, in which behavior primitives are generated in earlier stages and sequences of primitives for achieving goals appear in later stages. It was also observed that motor imagery is generated in earlier stages than actual behaviors. Our claim that manipulatable inner representations should emerge through sensory-motor interactions corresponds to Piaget's constructivist view.
The Top 10 Careers for the 1990s.
ERIC Educational Resources Information Center
Price, Paul
1988-01-01
Reports on a survey of experts from industry and academia which attempted to identify the top ten major career fields for engineers, including materials, biotechnology, automation and robotics, computer engineering, metals and mining, neural modeling, along with marine, aerospace, environmental and energy-related engineering. (TW)
Modelling brain emergent behaviours through coevolution of neural agents.
Maniadakis, Michail; Trahanias, Panos
2006-06-01
Recently, many research efforts have focused on modelling partial brain areas, with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework for brain modelling tasks that emphasizes the integrative performance of substructures. Moreover, the implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models of the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.
Study of the neural dynamics for understanding communication in terms of complex hetero systems.
Tsuda, Ichiro; Yamaguchi, Yoko; Hashimoto, Takashi; Okuda, Jiro; Kawasaki, Masahiro; Nagasaka, Yasuo
2015-01-01
The purpose of the research project was to establish a new research area named "neural information science for communication" by elucidating its neural mechanism. The research was performed in collaboration with applied mathematicians in complex-systems science and experimental researchers in neuroscience. The project included measurements of brain activity during communication with or without languages and analyses performed with the help of extended theories for dynamical systems and stochastic systems. The communication paradigm was extended to the interactions between human and human, human and animal, human and robot, human and materials, and even animal and animal. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Nycz, Christopher J; Gondokaryono, Radian; Carvalho, Paulo; Patel, Nirav; Wartenberg, Marek; Pilitsis, Julie G; Fischer, Gregory S
2018-01-01
The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. Building on our previous work, we have added a 2 degree-of-freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and performed an evaluation of the mechanism's accuracy and effect on MR image quality. The result of this work is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with an accuracy of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting it along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot's presence in the MRI bore, ≤ 10.3% reduction in SNR from running the robot's motors during a scan, and no visible paramagnetic artifacts. PMID:29696097
Designing and Implementing Nervous System Simulations on LEGO Robots
Blustein, Daniel; Rosenthal, Nikolai; Ayers, Joseph
2013-01-01
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems-level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus as described by Braitenberg, and are particularly well suited for investigation using the NXT platform [1]. The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses, specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum. PMID:23728477
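The decussating-connection reflex the authors exploit is the classic Braitenberg arrangement: each sensor excites the opposite-side motor, which steers the vehicle toward the stimulus. A minimal sketch (the sensor readings and gain are illustrative, not the lobster nervous-system model):

```python
# Decussating (crossed) excitatory connections implement Braitenberg-style
# positive taxis: contralateral wiring turns the robot toward a stimulus.

def wheel_speeds(left_sensor, right_sensor, gain=1.0):
    # Crossed connections: each sensor excites the OPPOSITE wheel motor.
    left_wheel = gain * right_sensor
    right_wheel = gain * left_sensor
    return left_wheel, right_wheel

# Stimulus on the robot's left: the left sensor reads stronger.
lw, rw = wheel_speeds(left_sensor=0.9, right_sensor=0.2)
print(rw > lw)   # right wheel spins faster, steering the robot leftward
```

Swapping to ipsilateral (uncrossed) wiring inverts the behavior, producing taxis away from the stimulus, which is the comparison such robot experiments make against the animal.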
Quantitative 3-D imaging topogrammetry for telemedicine applications
NASA Technical Reports Server (NTRS)
Altschuler, Bruce R.
1994-01-01
The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' able to work as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy.
Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.
Giannaccini, Maria Elena; Xiang, Chaoqun; Atyabi, Adham; Theodoridis, Theo; Nefti-Meziani, Samia; Davis, Steve
2018-02-01
Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.
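The curvature-pressure characterization described above can be sketched in miniature. Everything below (network size, the saturating pressure-curvature curve, the synthetic data) is an invented stand-in for the measured arm data, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pressure (kPa) -> curvature (1/m) data; the true mapping here
# is an assumed saturating curve, standing in for measured arm data.
P = np.linspace(0.0, 100.0, 200).reshape(-1, 1)
kappa = 2.0 * np.tanh(P / 40.0) + rng.normal(0.0, 0.02, P.shape)

# Normalise inputs, then fit a 1-hidden-layer tanh network by gradient descent.
x = P / 100.0
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    y = h @ W2 + b2                   # predicted curvature
    err = y - kappa
    # Backpropagate the squared-error loss.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - kappa) ** 2)))
print(f"fit RMSE: {rmse:.3f}")
```

Once fitted, the network can be inverted numerically to answer the practical question: what pressure produces a desired curvature?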
Xiang, Chaoqun; Atyabi, Adham; Theodoridis, Theo; Nefti-Meziani, Samia; Davis, Steve
2018-01-01
Abstract Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally. PMID:29412080
Anticipation by multi-modal association through an artificial mental imagery process
NASA Astrophysics Data System (ADS)
Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno
2015-01-01
Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model that produces anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network that provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We implemented our system on a physical Pioneer 3-DX robot and carried out two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, opening interesting pathways to the construction of upper-level grounded cognitive abilities.
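The off-line Hebbian association at the heart of this model can be illustrated with a toy script. Everything here (pattern sizes, the covariance form of the rule, the link between the first visual unit and the tactile outcome) is an invented simplification, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented binary "visual" patterns; in this toy the "tactile" outcome
# co-occurs with the first visual unit (e.g. an obstacle in view gets felt).
visual = rng.integers(0, 2, size=(50, 8)).astype(float)
tactile = visual[:, :1].copy()

# Offline Hebbian association (covariance form): strengthen weights
# between units that are co-active during the imagined episodes.
W = (visual - visual.mean(0)).T @ (tactile - tactile.mean(0)) / len(visual)

# After learning, a visual pattern alone anticipates the tactile outcome.
scores = (visual - visual.mean(0)) @ W
pred = (scores > 0).astype(float)
accuracy = float((pred == tactile).mean())
print(f"anticipation accuracy: {accuracy:.2f}")
```

The anticipation step is the point: the tactile consequence is predicted before contact, which is what lets the robot avoid the obstacle visually.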
Disturbances of Higher Level Neural Control-Robotic Applications in Stroke
2001-10-25
Interim Results on the Follow-up of 76 Patients and on Movement Performance Indices. In: Mounir Mokhtari (ed.), Integration of Assistive Technology in the ... the Nervous System's Adaptive Mechanisms. In: Mounir Mokhtari (ed.), Integration of Assistive Technology in the Information Age. IOS Press, Assistive
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation, keeping objects that move in three dimensions in view, is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee uniform ultimate bounds on the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation in the presence of severe target motion changes.
NASA Astrophysics Data System (ADS)
Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried
2017-09-01
Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
Self-organisation and communication in groups of simulated and physical robots.
Trianni, Vito; Dorigo, Marco
2006-09-01
In social insects, both self-organisation and communication play a crucial role for the accomplishment of many tasks at a collective level. Communication is performed with different modalities, which can be roughly classified into three classes: indirect (stigmergic) communication, direct interactions and direct communication. The use of stigmergic communication is predominant in social insects (e.g. the pheromone trails in ants), where, however, direct interactions (e.g. antennation in ants) and direct communication (e.g. the waggle dance in honey bees) can also be observed. Taking inspiration from insect societies, we present an experimental study of self-organising behaviours for a group of robots, which exploit communication to coordinate their activities. In particular, the robots are placed in an arena presenting holes and open borders, which they should avoid while moving coordinately. Artificial evolution is responsible for the synthesis in a simulated environment of the robot's neural controllers, which are subsequently tested on physical robots. We study different communication strategies among the robots: no direct communication, handcrafted signalling and a completely evolved approach. We show that the latter is the most efficient, suggesting that artificial evolution can produce behaviours that are more adaptive than those obtained with conventional design methodologies. Moreover, we show that the evolved controllers produce a self-organising system that is robust enough to be tested on physical robots, notwithstanding the huge gap between simulation and reality.
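Evolving a neural controller's synaptic weights, as done here for coordinated hole avoidance, boils down to a loop of evaluation, selection, and mutation. The sketch below uses an invented single-layer controller and a made-up fitness (matching a turn-away-from-the-hole rule); the actual experiments evolved full recurrent controllers in simulation before transfer to hardware:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for evolving a neural controller: sensors -> motor command.
# Assumed target behaviour: turn away from the side with the stronger
# "hole" sensor reading.
sensors = rng.uniform(0, 1, size=(64, 2))           # left/right hole sensors
target = np.sign(sensors[:, 0] - sensors[:, 1])     # +1 turn one way, -1 the other

def fitness(w):
    out = np.tanh(sensors @ w)
    return -np.mean((out - target) ** 2)

# Simple (mu + lambda) evolutionary loop over controller weights.
pop = rng.normal(0, 1, size=(20, 2))
for gen in range(100):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-5:]]            # keep the 5 best
    children = elite[rng.integers(0, 5, 15)] + rng.normal(0, 0.2, (15, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(f"best fitness: {fitness(best):.3f}")
```

The evolved weights end up opposing the two sensors, which is exactly the avoidance reflex the fitness rewards; nothing in the loop hand-codes that solution.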
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
NeuroMEMS: Neural Probe Microtechnologies
HajjHassan, Mohamad; Chodavarapu, Vamsy; Musallam, Sam
2008-01-01
Neural probe technologies have already had a significant positive effect on our understanding of the brain by revealing the functioning of networks of biological neurons. Probes are implanted in different areas of the brain to record and/or stimulate specific sites. Neural probes are currently used in many clinical settings for the diagnosis of brain diseases such as seizures, epilepsy, migraine, Alzheimer's disease, and dementia. These devices also assist paralyzed patients by allowing them to operate computers or robots using their neural activity. In recent years, probe technologies have been assisted by rapid advancements in microfabrication and microelectronic technologies, enabling highly functional and robust neural probes that are opening new and exciting avenues in the neural sciences and brain-machine interfaces. With a wide variety of probes designed, fabricated, and tested to date, this review aims to provide an overview of the advances and recent progress in the microfabrication techniques of neural probes. In addition, we aim to highlight the challenges faced in developing and implementing ultra-long multi-site recording probes that are needed to monitor neural activity from deeper regions of the brain. Finally, we review techniques that can improve the biocompatibility of neural probes to minimize the immune response and encourage neural growth around the electrodes for long-term implantation studies. PMID:27873894
Assistive-as-Needed Strategy for Upper-Limb Robotic Systems: An Initial Survey
NASA Astrophysics Data System (ADS)
Khairuddin, I. M.; Sidek, S. N.; Yusof, H. Md; Baarath, K.; Majeed, A. P. P. A.
2017-11-01
Stroke is among the leading causes of the loss of one's ability to carry out activities of daily living. It has been reported in the literature that the functional recovery of stroke patients is rather poor unless frequent rehabilitative therapy is administered to the affected limb. Recent trends in rehabilitation therapy have also shifted towards allowing more participation of the patient in the therapy session rather than simple passive treatments, as active participation has been demonstrated to promote the neural plasticity that expedites the motor recovery process. The employment of rehabilitation robotics is therefore seen as a means of mitigating the limitations of conventional rehabilitation therapy. It enables unique methods for promoting patient engagement by providing patients with assistance only on an as-needed basis. This paper reviews assist-as-needed control strategies applied to upper-limb robotic rehabilitation devices.
Development of a neuromorphic control system for a lightweight humanoid robot
NASA Astrophysics Data System (ADS)
Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo
2017-03-01
A neuromorphic control system for a lightweight, middle-sized humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neurons' outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization for the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, showed that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and a variance of 0.74.
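The architecture described (a fixed, sparse, random recurrent core plus trained linear read-outs) is essentially reservoir computing. A minimal sketch, with invented sizes and a sine wave standing in for a joint trajectory, trains the read-out by ridge regression under teacher forcing:

```python
import numpy as np

rng = np.random.default_rng(3)

N, T = 200, 1000
# Fixed, sparse, random recurrent weights (the "core" of dynamic neurons),
# rescaled to a spectral radius just under 1 for echo-state behaviour.
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

target = np.sin(2 * np.pi * np.arange(T) / 50)      # stand-in periodic trajectory
W_fb = rng.uniform(-1, 1, N)                        # feedback of the target

# Drive the reservoir with the target (teacher forcing) and collect states.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_fb * target[t - 1 if t else 0])
    states[t] = x

# Ridge-regression read-out: a linear combination of the neurons' outputs.
lam = 1e-4
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)

rmse = float(np.sqrt(np.mean((states @ W_out - target) ** 2)))
print(f"read-out RMSE: {rmse:.4f}")
```

Only `W_out` is learned; the recurrent core stays fixed, which is what makes this style of training cheap enough for embedded controllers.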
Biologically inspired computation and learning in Sensorimotor Systems
NASA Astrophysics Data System (ADS)
Lee, Daniel D.; Seung, H. S.
2001-11-01
Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.
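The reliability-based reweighting of input channels can be sketched as inverse-variance cue fusion. The three channels, their noise levels, and the use of ground truth as the "teacher" signal are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three invented sensory channels estimating the target's horizontal
# position; each channel has a different reliability (noise level).
true_pos = rng.uniform(-1, 1, size=500)
noise_std = {"color": 0.05, "motion": 0.20, "audio": 0.50}
cues = {k: true_pos + rng.normal(0, s, 500) for k, s in noise_std.items()}

# Teacher feedback adjusts each channel's weight toward its reliability:
# inverse-variance weighting, normalised to sum to one.
inv_var = {k: 1.0 / np.var(cues[k] - true_pos) for k in cues}
total = sum(inv_var.values())
weights = {k: v / total for k, v in inv_var.items()}

fused = sum(weights[k] * cues[k] for k in cues)
err_fused = float(np.mean((fused - true_pos) ** 2))
err_best = float(np.mean((cues["color"] - true_pos) ** 2))
print(f"weights: {weights}")
print(f"fused MSE {err_fused:.5f} vs best single cue {err_best:.5f}")
```

The fused estimate is never worse than the best single channel, and noisy channels are automatically down-weighted, which is the behaviour the teacher feedback in the robot is meant to produce.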
Tani, Jun; Nishimoto, Ryunosuke; Paine, Rainer W
2008-05-01
The current paper examines how compositional structures can self-organize in given neuro-dynamical systems when robot agents are forced to learn multiple goal-directed behaviors simultaneously. Firstly, we propose a basic model accounting for the roles of parietal-premotor interactions for representing skills for goal-directed behaviors. The basic model had been implemented in a set of robotics experiments employing different neural network architectures. The comparative reviews among those experimental results address the issues of local vs distributed representations in representing behavior and the effectiveness of level structures associated with different sensory-motor articulation mechanisms. It is concluded that the compositional structures can be acquired "organically" by achieving generalization in learning and by capturing the contextual nature of skilled behaviors under specific conditions. Furthermore, the paper discusses possible feedback for empirical neuroscience studies in the future.
Chen, Gang; Song, Yongduan; Guan, Yanfeng
2018-03-01
This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.
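Though the paper's distributed scheme is far richer, the finite-time behaviour of a terminal sliding-mode law can be seen on a scalar toy system; the gains, exponent, and disturbance below are arbitrary choices for illustration, not the authors' design:

```python
import numpy as np

# Toy finite-time convergence with a terminal sliding-mode style law:
# xdot = u + d(t),  u = -k*|x|^a*sign(x) - k2*sign(x),  with a in (0, 1)
# and k2 chosen larger than the disturbance bound.
dt, k, k2, a = 1e-3, 2.0, 0.6, 0.5
x = 1.5
traj = []
for step in range(10000):                     # 10 s of simulated time
    t = step * dt
    d = 0.5 * np.sin(2 * np.pi * t)           # bounded unknown disturbance
    u = -k * abs(x) ** a * np.sign(x) - k2 * np.sign(x)
    x += dt * (u + d)
    traj.append(x)

print(f"|x| after 10 s: {abs(traj[-1]):.4f}")
```

The fractional exponent is what distinguishes *terminal* sliding mode from the linear case: near the origin the restoring term does not vanish proportionally to the error, so convergence completes in finite time rather than asymptotically.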
Dissipative rendering and neural network control system design
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.
1995-01-01
Model-based control system designs are limited by the accuracy of the models of the plant, plant uncertainty, and exogenous signals. Although better models can be obtained with system identification, the models and control designs still have limitations. One approach to reduce the dependency on particular models is to design a set of compensators that will guarantee robust stability for a set of plants. Optimization over the compensator parameters can then be used to achieve the desired performance. The conservativeness of this approach can be reduced by integrating fundamental properties of the plant models. This is the approach of dissipative control design. Dissipative control designs are based on several variations of the Passivity Theorem, which have been proven for nonlinear/linear and continuous-time/discrete-time systems. These theorems depend not on a specific model of a plant, but on its general dissipative properties. Dissipative control design has found wide applicability in flexible space structures and robotic systems that can be configured to be dissipative. Currently, there is ongoing research to improve the performance of dissipative control designs. For aircraft systems that are not dissipative, active control may be used to make them dissipative, and then a dissipative control design technique can be applied. It is also possible that rendering a system dissipative and dissipative control design may be combined into one step. Furthermore, the transformation of a non-dissipative system into a dissipative one can be done robustly. One sequential design procedure for finite-dimensional linear time-invariant systems has been developed. For nonlinear plants that cannot be controlled adequately with a single linear controller, model-based techniques have additional problems. Nonlinear system identification is still a research topic. In the absence of analytical models for model-based design, artificial neural network algorithms have recently received considerable attention.
Using their universal approximation property, neural networks have been introduced into nonlinear control designs in several ways. Unfortunately, little work has appeared that analyzes neural network control systems and establishes margins for stability and performance. One approach for this analysis is to set up neural network control systems in the framework presented above. For example, one neural network could be used to render a system to be dissipative, a second strictly dissipative neural network controller could be used to guarantee robust stability.
1990-12-01
Application of Neural Networks to Robotics. Ziaudin Ahmad, John Selizuky, Allon Guez; Drexel University, Department of Electrical and Computer Engineering.
Aliper, Alexander; Plis, Sergey; Artemov, Artem; Ulloa, Alvaro; Mamoshina, Polina; Zhavoronkov, Alex
2016-07-05
Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics, and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF-7, and PC-3 cell lines from the LINCS Project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled data set of samples perturbed with different concentrations of the drug for 6 and 24 hours. In both pathway and gene level classification, DNN achieved high classification accuracy and convincingly outperformed the support vector machine (SVM) model on every multiclass classification problem, however, models based on pathway level data performed significantly better. For the first time we demonstrate a deep learning neural net trained on transcriptomic data to recognize pharmacological properties of multiple drugs across different biological systems and conditions. We also propose using deep neural net confusion matrices for drug repositioning. This work is a proof of principle for applying deep learning to drug discovery and development.
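The proposed use of confusion matrices for repositioning can be sketched directly: off-diagonal entries mark therapeutic categories that the network systematically confuses, and such a confusion is a candidate repositioning hypothesis. The three categories and the simulated classifier below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented predictions from a 3-class therapeutic-category classifier.
classes = ["cardio", "cns", "oncology"]
y_true = rng.integers(0, 3, 300)
# Simulated classifier: mostly right, but confuses cns with oncology.
y_pred = y_true.copy()
flip = rng.random(300) < 0.2
y_pred[flip & (y_true == 1)] = 2

# Confusion matrix: rows = true category, columns = predicted category.
cm = np.zeros((3, 3), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

# Repositioning cue: off-diagonal mass says drugs of one category "look
# like" another to the network.
off = cm.copy()
np.fill_diagonal(off, 0)
src, dst = np.unravel_index(off.argmax(), off.shape)
print(cm)
print(f"most-confused pair: {classes[src]} -> {classes[dst]}")
```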
Aliper, Alexander; Plis, Sergey; Artemov, Artem; Ulloa, Alvaro; Mamoshina, Polina; Zhavoronkov, Alex
2016-01-01
Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF‐7 and PC‐3 cell lines from the LINCS project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled dataset of samples perturbed with different concentrations of the drug for 6 and 24 hours. In both gene and pathway level classification, DNN convincingly outperformed support vector machine (SVM) model on every multiclass classification problem, however, models based on a pathway level classification perform better. For the first time we demonstrate a deep learning neural net trained on transcriptomic data to recognize pharmacological properties of multiple drugs across different biological systems and conditions. We also propose using deep neural net confusion matrices for drug repositioning. This work is a proof of principle for applying deep learning to drug discovery and development. PMID:27200455
Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario
2012-03-01
The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility to modulate ESC biological properties and to obtain homogenous populations of correctly specified cells will help developing physiologically relevant screens, designed for the identification of stem cell modulators. Here, we developed a high throughput screening-suitable ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.
Unsupervised texture image segmentation by improved neural network ART2
NASA Technical Reports Server (NTRS)
Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco
1994-01-01
We propose a segmentation algorithm for texture images in a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can grow through an autogrowing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.
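The autogrowing output layer can be illustrated with a stripped-down, ART-like prototype learner. The vigilance radius, learning rate, and synthetic "texture" features below are all invented, and real ART2 involves more machinery (resonance, normalization, the fuzzy thresholding described above):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy feature vectors from three "textures" (invented SGLDM-style features).
centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])
feats = np.vstack([c + rng.normal(0, 0.05, (40, 2)) for c in centers])
rng.shuffle(feats)

# ART-style autogrowing clustering: a new output unit is created whenever
# no existing prototype matches the input within the vigilance radius.
vigilance = 0.25
protos = [feats[0].copy()]
for f in feats[1:]:
    d = [np.linalg.norm(f - p) for p in protos]
    j = int(np.argmin(d))
    if d[j] < vigilance:
        protos[j] += 0.1 * (f - protos[j])   # move the winner toward the input
    else:
        protos.append(f.copy())              # autogrow a new cluster

print(f"clusters found: {len(protos)}")
```

Unlike K-means, the number of clusters is not fixed in advance; the vigilance parameter trades cluster granularity against over-segmentation, which is the comparison the abstract draws.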
An artificial neural network model for periodic trajectory generation
NASA Astrophysics Data System (ADS)
Shankar, S.; Gander, R. E.; Wood, H. C.
A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.
NASA Astrophysics Data System (ADS)
Waldmann, I. P.
2016-04-01
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang
2018-03-01
Balance between the biological plausibility of dynamical activities and computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of the conductance-based neuron model with relevant dynamics, targeting the reproduction of biological behaviors with low-cost implementation on a digital programmable platform, applicable to a wide range of conductance-based neuron models. Modified GP neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, biological behaviors with weak network coupling are explored on the proposed platform, and a theoretical analysis is made of the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and the real-time control of bio-inspired neuro-robotics.
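As a feel for what such a hardware pipeline must compute each time step, here is a minimal Euler integration of the classic Hodgkin-Huxley conductance-based neuron in plain Python. These are standard textbook parameters; the paper's modified GP models differ:

```python
import numpy as np

# Minimal Euler-integrated Hodgkin-Huxley neuron: the arithmetic in this
# update loop is what a low-cost FPGA pipeline has to reproduce each step.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4
dt, I_ext = 0.01, 10.0                     # ms, uA/cm^2

V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, above = 0, False
for _ in range(int(100 / dt)):             # 100 ms of simulated time
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    m += dt * (am * (1 - m) - bm * m)      # gating-variable kinetics
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_ext - I_ion) / C
    if V > 0 and not above:                # count upward threshold crossings
        spikes += 1
    above = V > 0

print(f"spikes in 100 ms: {spikes}")
```

The exponentials and divisions in the rate functions are exactly the operations that are expensive in digital hardware, which is why FPGA implementations replace them with piecewise or lookup-table approximations.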
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
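The UDP joint-angle exchange described above is straightforward to prototype. Below is a minimal loopback sketch; the message layout (packed little-endian doubles) is our assumption, not the published interface.

```python
import socket
import struct

def send_joints(sock, joints, addr):
    """Pack N joint angles (radians) as little-endian doubles and send."""
    sock.sendto(struct.pack("<%dd" % len(joints), *joints), addr)

def recv_joints(sock, n):
    """Receive and unpack N joint angles from one datagram."""
    data, _ = sock.recvfrom(8 * n)
    return struct.unpack("<%dd" % n, data)

# loopback demo: the "robot" side reads back the commanded joint positions
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # OS-assigned port avoids collisions
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_joints(tx, [0.25, -1.10], rx.getsockname())
echoed = recv_joints(rx, 2)
```

Fixed-length binary packing keeps parsing trivial at real-time rates, which is why UDP is a common choice for model-to-robot streaming despite its lack of delivery guarantees.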
Neuro-cognitive mechanisms of decision making in joint action: a human-robot interaction study.
Bicho, Estela; Erlhagen, Wolfram; Louro, Luis; e Silva, Eliana Costa
2011-10-01
In this paper we present a model for action preparation and decision making in cooperative tasks that is inspired by recent experimental findings about the neuro-cognitive mechanisms supporting joint action in humans. It implements the coordination of actions and goals among the partners as a dynamic process that integrates contextual cues, shared task knowledge and predicted outcome of others' motor behavior. The control architecture is formalized by a system of coupled dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode task-relevant information about action means, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic model of joint action is evaluated in a task in which a robot and a human jointly construct a toy object. We show that the highly context sensitive mapping from action observation onto appropriate complementary actions allows coping with dynamically changing joint action situations. Copyright © 2010 Elsevier B.V. All rights reserved.
Social interaction enhances motor resonance for observed human actions.
Hogeveen, Jeremy; Obhi, Sukhvinder S
2012-04-25
Understanding the neural basis of social behavior has become an important goal for cognitive neuroscience and a key aim is to link neural processes observed in the laboratory to more naturalistic social behaviors in real-world contexts. Although it is accepted that mirror mechanisms contribute to the occurrence of motor resonance (MR) and are common to action execution, observation, and imitation, questions remain about mirror (and MR) involvement in real social behavior and in processing nonhuman actions. To determine whether social interaction primes the MR system, groups of participants engaged or did not engage in a social interaction before observing human or robotic actions. During observation, MR was assessed via motor-evoked potentials elicited with transcranial magnetic stimulation. Compared with participants who did not engage in a prior social interaction, participants who engaged in the social interaction showed a significant increase in MR for human actions. In contrast, social interaction did not increase MR for robot actions. Thus, naturalistic social interaction and laboratory action observation tasks appear to involve common MR mechanisms, and recent experience tunes the system to particular agent types.
Sergi, Fabrizio; Krebs, Hermano Igo; Groissier, Benjamin; Rykman, Avrielle; Guglielmelli, Eugenio; Volpe, Bruce T; Schaechter, Judith D
2011-01-01
We are investigating the neural correlates of motor recovery promoted by robot-mediated therapy in chronic stroke. This pilot study asked whether efficacy of robot-aided motor rehabilitation in chronic stroke could be predicted by a change in functional connectivity within the sensorimotor network in response to a bout of motor rehabilitation. To address this question, two stroke patients participated in a functional connectivity MRI study pre and post a 12-week robot-aided motor rehabilitation program. Functional connectivity was evaluated during three consecutive scans before the rehabilitation program: resting-state; point-to-point reaching movements executed by the paretic upper extremity (UE) using a newly developed MRI-compatible sensorized passive manipulandum; resting-state. A single resting-state scan was conducted after the rehabilitation program. Before the program, UE movement reduced functional connectivity between the ipsilesional and contralesional primary motor cortex. Reduced interhemispheric functional connectivity persisted during the second resting-state scan relative to the first and during the resting-state scan after the rehabilitation program. Greater reduction in interhemispheric functional connectivity during the resting-state was associated with greater gains in UE motor function induced by the 12-week robotic therapy program. These findings suggest that greater reduction in interhemispheric functional connectivity in response to a bout of motor rehabilitation may predict greater efficacy of the full rehabilitation program.
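Functional connectivity of the kind measured here reduces, in its simplest form, to the Pearson correlation between two regions' time series. A toy sketch with synthetic BOLD-like signals; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200) * 2.0                       # 200 volumes at TR = 2 s (toy)
shared = np.sin(2.0 * np.pi * 0.01 * t)        # common slow fluctuation
m1_ipsi = shared + 0.5 * rng.standard_normal(t.size)
m1_contra = shared + 0.5 * rng.standard_normal(t.size)

def connectivity(x, y):
    """Functional connectivity as the Pearson correlation of two time series."""
    return float(np.corrcoef(x, y)[0, 1])

r = connectivity(m1_ipsi, m1_contra)
```

In the study above, it is a movement-induced *reduction* in this interhemispheric correlation that tracked later rehabilitation gains.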
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
Multiphasic On/Off Pheromone Signalling in Moths as Neural Correlates of a Search Strategy
Martinez, Dominique; Chaffiol, Antoine; Voges, Nicole; Gu, Yuqiao; Anton, Sylvia; Rospars, Jean-Pierre; Lucas, Philippe
2013-01-01
Insects and robots searching for odour sources in turbulent plumes face the same problem: the random nature of mixing causes fluctuations and intermittency in perception. Pheromone-tracking male moths appear to deal with discontinuous flows of information by surging upwind, upon sensing a pheromone patch, and casting crosswind, upon losing the plume. Using a combination of neurophysiological recordings, computational modelling and experiments with a cyborg, we propose a neuronal mechanism that promotes a behavioural switch between surge and casting. We show how multiphasic On/Off pheromone-sensitive neurons may guide action selection based on signalling presence or loss of the pheromone. A Hodgkin-Huxley-type neuron model with a small-conductance calcium-activated potassium (SK) channel reproduces physiological On/Off responses. Using this model as a command neuron and the antennae of tethered moths as pheromone sensors, we demonstrate the efficiency of multiphasic patterning in driving a robotic searcher toward the source. Taken together, our results suggest that multiphasic On/Off responses may mediate olfactory navigation and that SK channels may account for these responses. PMID:23613816
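Stripped of the Hodgkin-Huxley/SK-channel machinery, the proposed behavioural switch can be sketched as a two-state rule: an On event (pheromone detected) triggers an upwind surge, and an Off event (plume lost) triggers crosswind casting. This is a toy stand-in, not the paper's command-neuron model.

```python
def select_action(on_event, off_event, current):
    """On (pheromone detected) -> surge upwind; Off (plume lost) -> cast."""
    if on_event:
        return "surge"
    if off_event:
        return "cast"
    return current  # no event: keep doing what we were doing

# intermittent plume: 1 = pheromone patch sensed, 0 = clean air
detections = [0, 1, 1, 0, 0, 1, 0]
actions, state, prev = [], "cast", 0
for d in detections:
    state = select_action(on_event=(d and not prev),
                          off_event=(prev and not d),
                          current=state)
    actions.append(state)
    prev = d
```

The point of the multiphasic On/Off neuron is that both events are signalled by one cell, so a single command line suffices to drive the switch.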
Improved Autoassociative Neural Networks
NASA Technical Reports Server (NTRS)
Hand, Charles
2003-01-01
Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
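The "inner product of inputs with weights, then threshold" update described above is the classic Hopfield-style autoassociative recall rule. A minimal sketch with +/-1 vectors follows; the nexus-specific bit-weight arithmetic is not reproduced here.

```python
import numpy as np

def train(patterns):
    """Hebbian weights from +/-1 patterns; zero diagonal (no self-input)."""
    n = patterns.shape[1]
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def recall(w, state, steps=5):
    """Synchronous update: threshold the inner products each time step."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
w = train(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]                 # corrupt one bit
recovered = recall(w, noisy)
```

With a single stored pattern, one synchronous update already repairs the flipped bit: the state vector moves downhill on the "hyperspace landscape" to the stored attractor.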
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is thus coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
Large-scale deep learning for robotically gathered imagery for science
NASA Astrophysics Data System (ADS)
Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.
2016-12-01
With the explosion of computing power, the intelligence and capability of mobile robotics have dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science through a combination of machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.
NASA Astrophysics Data System (ADS)
Hajj-Hassan, Mohamad; Gonzalez, Timothy; Ghafer-Zadeh, Ebrahim; Chodavarapu, Vamsy; Musallam, Sam; Andrews, Mark
2009-02-01
Neural microelectrodes are an important component of neural prosthetic systems which assist paralyzed patients by allowing them to operate computers or robots using their neural activity. These microelectrodes are also used in clinical settings to localize the locus of seizure initiation in epilepsy or to stimulate sub-cortical structures in patients with Parkinson's disease. In neural prosthetic systems, implanted microelectrodes record the electrical potential generated by specific thoughts and relay the signals to algorithms trained to interpret these thoughts. In this paper, we describe novel elongated multi-site neural electrodes that can record electrical signals and specific neural biomarkers and that can reach depths greater than 8 mm in the sulcus of non-human primates (monkeys). We hypothesize that additional signals recorded by the multimodal probes will increase the information yield when compared to standard probes that record just electropotentials. We describe integration of optical biochemical sensors with neural microelectrodes. The sensors are made using sol-gel derived xerogel thin films that encapsulate specific biomarker-responsive luminophores in their nanostructured pores. The desired neural biomarkers are O2, pH, K+, and Na+ ions. As a prototype, we demonstrate direct-write patterning to create oxygen-responsive xerogel waveguide structures on the neural microelectrodes. The recording of neural biomarkers along with electrical activity could help the development of intelligent and more user-friendly neural prostheses/brain-machine interfaces as well as aid in providing answers to complex brain diseases and disorders.
Embedded Streaming Deep Neural Networks Accelerator With Applications.
Dundar, Aysegul; Jin, Jonghoon; Martini, Berin; Culurciello, Eugenio
2017-07-01
Deep convolutional neural networks (DCNNs) have become a very powerful tool in visual perception. DCNNs have applications in autonomous robots, security systems, mobile phones, and automobiles, where high throughput of the feedforward evaluation phase and power efficiency are important. Because of this increased usage, many field-programmable gate array (FPGA)-based accelerators have been proposed. In this paper, we present an optimized streaming method for a DCNN hardware accelerator on an embedded platform. The streaming method acts as a compiler, transforming a high-level representation of DCNNs into operation codes to execute applications in a hardware accelerator. The proposed method utilizes the maximum computational resources available, based on a novel scheduled routing topology that combines data reuse and data concatenation. It is tested with a hardware accelerator implemented on the Xilinx Kintex-7 XC7K325T FPGA. The system fully explores weight-level and node-level parallelizations of DCNNs and achieves a peak performance of 247 G-ops while consuming less than 4 W of power. We test our system with applications on object classification and object detection in real-world scenarios. Our results indicate high performance efficiency, outperforming all other presented platforms while running these applications.
Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung
2018-05-24
Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light-camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO to extract trained features from an input image and predict a marker's location from the drone's visible-light camera sensor. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.
An Integrated Gait and Balance Analysis System to Define Human Locomotor Control
2016-04-29
…common in the “real-world”. Furthermore, BCI controllers need some sort of direct link into neural signals and this requires invasive surgery and… [record truncated; the fragment goes on to cite: …L. J., Simon, A. M., Young, A. J., Lipschutz, R. D., Finucane, S. B., Smith, D. G., & Kuiken, T. A. (2013). Robotic leg control with EMG decoding in…]
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work, a new algorithm is proposed that combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
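For orientation, here is a matched-filter spatial projection on synthetic cuff-like data: a simplified cousin of the beamforming baseline that HBSE is compared against, not HBSE itself. The steering vector and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 8, 500
steering = rng.standard_normal(n_ch)        # source-to-contact gain vector
steering /= np.linalg.norm(steering)
source = np.sin(np.linspace(0.0, 20.0 * np.pi, n_t))   # fascicular signal
recordings = np.outer(steering, source) + 0.3 * rng.standard_normal((n_ch, n_t))

def matched_beamformer(x, w):
    """Weighted channel sum: project recordings onto the unit-norm steering vector."""
    return w @ x

estimate = matched_beamformer(recordings, steering)
r = float(np.corrcoef(estimate, source)[0, 1])
```

Combining channels coherently along the steering direction averages down the uncorrelated noise, which is the basic mechanism behind the SNR gains reported above.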
Wang, Ying; Lin, Xudong; Chen, Xi; Chen, Xian; Xu, Zhen; Zhang, Wenchong; Liao, Qinghai; Duan, Xin; Wang, Xin; Liu, Ming; Wang, Feng; He, Jufang; Shi, Peng
2017-10-01
Many nanomaterials can be used as sensors or transducers in biomedical research and they form the essential components of transformative novel biotechnologies. In this study, we present an all-optical method for tetherless remote control of neural activity using fully implantable micro-devices based on upconversion technology. Upconversion nanoparticles (UCNPs) were used as transducers to convert near-infrared (NIR) energy to visible light in order to stimulate neurons expressing different opsin proteins. In our setup, UCNPs were packaged in a glass micro-optrode to form an implantable device with superb long-term biocompatibility. We showed that remotely applied NIR illumination is able to reliably trigger spiking activity in rat brains. In combination with a robotic laser projection system, the upconversion-based tetherless neural stimulation technique was implemented to modulate brain activity in various regions, including the striatum, ventral tegmental area, and visual cortex. Using this system, we were able to achieve behavioral conditioning in freely moving animals. Notably, our microscale device was at least one order of magnitude smaller in size (∼100 μm in diameter) and two orders of magnitude lighter in weight (less than 1 mg) than existing wireless optogenetic devices based on light-emitting diodes. This feature allows simultaneous implantation of multiple UCNP-optrodes to achieve modulation of brain function to control complex animal behavior. We believe that this technology not only represents a novel practical application of upconversion nanomaterials, but also opens up new possibilities for remote control of neural activity in the brains of behaving animals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ratiometric Decoding of Pheromones for a Biomimetic Infochemical Communication System.
Wei, Guangfen; Thomas, Sanju; Cole, Marina; Rácz, Zoltán; Gardner, Julian W
2017-10-30
Biosynthetic infochemical communication is an emerging scientific field employing molecular compounds for information transmission, labelling, and biochemical interfacing, with potential applications in diverse areas ranging from pest management to group coordination of swarming robots. Our communication system comprises a chemoemitter module that encodes information by producing volatile pheromone components and a chemoreceiver module that decodes the transmitted ratiometric information via polymer-coated piezoelectric Surface Acoustic Wave Resonator (SAWR) sensors. The inspiration for such a system is the pheromone-based communication between insects. Ten features are extracted from the SAWR sensor response and analysed using multivariate classification techniques, i.e., Linear Discriminant Analysis (LDA), Probabilistic Neural Network (PNN), and Multilayer Perceptron Neural Network (MLPNN) methods, and an optimal feature subset is identified. A combination of steady-state and transient features of the sensor signals showed superior performance with LDA and MLPNN. Although MLPNN gave excellent results, reaching a 100% recognition rate at 400 s, over all time stations PNN gave the best performance based on an expanded data-set with adjacent neighbours. In this case, 100% of the pheromone mixtures were successfully identified just 200 s after they were first injected into the wind tunnel. We believe that this approach can be used for future chemical communication employing simple mixtures of airborne molecules.
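The ratiometric classification task can be caricatured with a nearest-centroid rule over synthetic ratio-tracking features: a toy stand-in for the paper's LDA/PNN/MLPNN classifiers. The class ratios, feature model, and noise level are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
ratios = {"90:10": [0.9, 0.1], "70:30": [0.7, 0.3], "50:50": [0.5, 0.5]}

def simulate_features(ratio, n=20, noise=0.02):
    """Two steady-state sensor features that track the blend ratio (toy model)."""
    return np.array(ratio) + noise * rng.standard_normal((n, 2))

train = {k: simulate_features(v) for k, v in ratios.items()}
centroids = {k: f.mean(axis=0) for k, f in train.items()}

def classify(x):
    """Assign a sample to the class with the nearest feature centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

test_sample = simulate_features(ratios["70:30"], n=1)[0]
pred = classify(test_sample)
```

The paper's finding that adding transient features helps corresponds, in this picture, to widening the feature space so the class centroids separate earlier in time.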
NASA Astrophysics Data System (ADS)
Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.
2016-02-01
Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnosis technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with the least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method for the diagnosis and localization of brain tumors are investigated. In order to analyze the validity and reliability of the proposed method, two simulations were performed. (i) An in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom in order to simulate heat generation by a tumor in the brain tissue; and (ii) a case report patient with parafalcine meningioma was presented to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data and the tumor temperature and depth of location were estimated. For the experimental tests, a novel assisted surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor’s power and depth. Results affirm that ITI-based ATS is a non-invasive method which can be useful to detect, localize and characterize brain tumors.
Design of a monitor and simulation terminal (master) for space station telerobotics and telescience
NASA Technical Reports Server (NTRS)
Lopez, L.; Konkel, C.; Harmon, P.; King, S.
1989-01-01
Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.
Jiang, Ping; Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun
2016-01-01
The development of a physiologically plausible computational model of a neural controller that can realize a human-like biped stance is important for a large number of potential applications, such as assisting device development and designing robotic control systems. In this paper, we develop a computational model of a neural controller that can maintain a musculoskeletal model in a standing position, while incorporating a 120-ms neurological time delay. Unlike previous studies that have used an inverted pendulum model, a musculoskeletal model with seven joints and 70 muscular-tendon actuators is adopted to represent the human anatomy. Our proposed neural controller is composed of both feed-forward and feedback controls. The feed-forward control corresponds to the constant activation input necessary for the musculoskeletal model to maintain a standing posture. This compensates for gravity and regulates stiffness. The developed neural controller model can replicate two salient features of the human biped stance: (1) physiologically plausible muscle activations for quiet standing; and (2) selection of a low active stiffness for low energy consumption. PMID:27655271
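The feedback half of such a controller can be caricatured as delayed PD control of a linearized inverted pendulum with the 120-ms delay noted above. The gains and plant constants here are our illustrative assumptions (chosen to be stable under the delay), not the paper's 70-actuator musculoskeletal model.

```python
from collections import deque

G_OVER_L = 9.8        # gravitational toppling rate (l = 1 m point mass, toy)
KP, KD = 20.0, 6.0    # delayed PD gains (assumed, chosen for stability)
DT, DELAY = 0.001, 0.12

theta, omega = 0.05, 0.0                       # initial lean: 0.05 rad
hist = deque([(theta, omega)] * round(DELAY / DT))  # 120-ms sensory buffer
trace = []
for _ in range(5000):                          # 5 s of simulated standing
    th_d, om_d = hist.popleft()                # delayed state available to CNS
    u = -KP * th_d - KD * om_d                 # corrective "muscle torque"
    alpha = G_OVER_L * theta + u               # linearized pendulum dynamics
    theta += DT * omega
    omega += DT * alpha
    hist.append((theta, omega))
    trace.append(theta)
```

Too little damping (KD) or too much stiffness (KP) destabilizes this loop under the delay, which is one reason the paper's observation of a low selected active stiffness is physiologically interesting.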
Navigation and Robotics in Spinal Surgery: Where Are We Now?
Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M
2017-03-01
Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have expanded as well. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority over traditional techniques before their use is assimilated amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors and using exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.
Biological neural networks as model systems for designing future parallel processing computers
NASA Technical Reports Server (NTRS)
Ross, Muriel D.
1991-01-01
One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start has been made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional-integral-derivative (PID) controller with an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic." 
A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work lies in providing a greater understanding of the applications of learning to mobile robots, which could lead to many applications.
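As a minimal illustration of the learning-control idea surveyed above, the sketch below runs a discrete PID loop on a simple first-order plant; in a learning scheme, the three gains would be the quantities adapted online. The plant model, gains, and step sizes are illustrative assumptions, not values from the paper.

```python
# Discrete PID control of the first-order plant x' = -x + u.
# All gains and the plant model are illustrative assumptions.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Return the plant state after driving it toward the setpoint."""
    x = 0.0                      # plant state
    integral = 0.0               # accumulated error
    prev_err = setpoint - x
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        x += (-x + u) * dt                          # Euler step of the plant
        prev_err = err
    return x

final = simulate_pid(kp=5.0, ki=2.0, kd=0.1)        # settles near the setpoint
```

A learning controller of the kind discussed would adjust `kp`, `ki`, and `kd` from experience (e.g. via a neural network or an adaptive critic) rather than fixing them in advance.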
A New Powered Lower Limb Prosthesis Control Framework Based on Adaptive Dynamic Programming.
Wen, Yue; Si, Jennie; Gao, Xiang; Huang, Stephanie; Huang, He Helen
2017-09-01
This brief presents a novel application of adaptive dynamic programming (ADP) for optimal adaptive control of powered lower limb prostheses, a type of wearable robot that assists the motor function of limb amputees. Current control of these robotic devices typically relies on finite state impedance control (FS-IC), which lacks adaptability to the user's physical condition. As a result, joint impedance settings are often customized manually and heuristically in clinics, which greatly hinders the wide use of these advanced medical devices. This simulation study aimed at demonstrating the feasibility of ADP for automatic tuning of the twelve knee joint impedance parameters during a complete gait cycle to achieve balanced walking. Given that accurate models of human walking dynamics are difficult to obtain, model-free ADP control algorithms were considered. First, direct heuristic dynamic programming (dHDP) was applied to the control problem, and its performance was evaluated on OpenSim, an often-used dynamic walking simulator. For comparison purposes, we selected another established ADP algorithm, neural fitted Q with continuous action (NFQCA). In both cases, the ADP controllers learned to control the right knee joint and achieved balanced walking, but dHDP outperformed NFQCA in this application over 200 gait cycles of testing.
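The finite-state impedance control that ADP tunes can be sketched as a per-phase torque law. FS-IC commonly assigns a (stiffness, damping, equilibrium-angle) triple to each of four gait phases, which would account for the twelve knee parameters mentioned above; the phase names and numeric values below are illustrative assumptions only.

```python
# Sketch of finite-state impedance control for a prosthetic knee joint.
# Phase definitions and all numeric values are illustrative assumptions.

PHASES = {
    # phase: (stiffness k [Nm/rad], damping b [Nms/rad], equilibrium angle [rad])
    "early_stance": (3.0, 0.10, 0.05),
    "late_stance":  (2.5, 0.08, 0.30),
    "early_swing":  (0.8, 0.05, 1.00),
    "late_swing":   (1.2, 0.06, 0.10),
}  # 4 phases x 3 parameters = 12 tunable values

def knee_torque(phase, theta, omega):
    """Impedance law: tau = -k*(theta - theta_eq) - b*omega."""
    k, b, theta_eq = PHASES[phase]
    return -k * (theta - theta_eq) - b * omega
```

An ADP tuner would adjust the entries of `PHASES` between gait cycles based on a cost signal, instead of a clinician setting them by hand.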
Powered robotic exoskeletons in post-stroke rehabilitation of gait: a scoping review.
Louie, Dennis R; Eng, Janice J
2016-06-08
Powered robotic exoskeletons are a potential intervention for gait rehabilitation in stroke to enable repetitive walking practice to maximize neural recovery. As this is a relatively new technology for stroke, a scoping review can help guide current research and propose recommendations for advancing the research development. The aim of this scoping review was to map the current literature surrounding the use of robotic exoskeletons for gait rehabilitation in adults post-stroke. Five databases (Pubmed, OVID MEDLINE, CINAHL, Embase, Cochrane Central Register of Clinical Trials) were searched for articles from inception to October 2015. Reference lists of included articles were reviewed to identify additional studies. Articles were included if they utilized a robotic exoskeleton as a gait training intervention for adult stroke survivors and reported walking outcome measures. Of 441 records identified, 11 studies, all published within the last five years, involving 216 participants met the inclusion criteria. The study designs ranged from pre-post clinical studies (n = 7) to controlled trials (n = 4); five of the studies utilized a robotic exoskeleton device unilaterally, while six used a bilateral design. Participants ranged from sub-acute (<7 weeks) to chronic (>6 months) stroke. Training periods ranged from single-session to 8-week interventions. Main walking outcome measures were gait speed, Timed Up and Go, 6-min Walk Test, and the Functional Ambulation Category. Meaningful improvement with exoskeleton-based gait training was more apparent in sub-acute stroke compared to chronic stroke. Two of the four controlled trials showed no greater improvement in any walking outcomes compared to a control group in chronic stroke. In conclusion, clinical trials demonstrate that powered robotic exoskeletons can be used safely as a gait training intervention for stroke. 
Preliminary findings suggest that exoskeletal gait training is equivalent to traditional therapy for chronic stroke patients, while sub-acute patients may experience added benefit from exoskeletal gait training. Efforts should be invested in designing rigorous, appropriately powered controlled trials before powered exoskeletons can be translated into a clinical tool for gait rehabilitation post-stroke.
Surface EMG signals based motion intent recognition using multi-layer ELM
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Qi, Lin; Wang, Xiao
2017-11-01
The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) signals contain motion information, as these electric signals are generated by and related to nerve-muscle activity. The sEMG signals, representing the user's intended active motions, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable step for drawing significant information from the original signals, and a tedious task requiring rich domain experience. This paper employs a deep learning scheme to extract the internal features of sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the entire multi-layer ELM (ML-ELM) algorithm. The method is then employed for sEMG-based motion-intent recognition. The case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme across 5 different types of upper-limb motions. This indicates the effectiveness and learning capability of the ML-ELM in such motion-intent recognition applications.
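The ELM-AE step described above can be sketched in a few lines: a random hidden layer maps the input, the output weights are solved in closed form by ridge-regularised least squares, and the learned weights then embed the data. The sizes, activation, and embedding convention here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal ELM auto-encoder sketch: random hidden layer, output weights
# solved in closed form. All sizes and conventions are assumptions.

rng = np.random.default_rng(0)

def elm_ae(X, n_hidden, ridge=1e-3):
    """Return random hidden-layer parameters and learned output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # Ridge-regularised least squares: beta = (H'H + rI)^-1 H'X
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ X)
    return W, b, beta

def encode(X, beta):
    """ELM-AE convention: the learned output weights embed the data."""
    return X @ beta.T

X = rng.standard_normal((200, 16))       # stand-in for sEMG feature windows
W, b, beta = elm_ae(X, n_hidden=8)
Z = encode(X, beta)                      # 8-dimensional learned features
```

Stacking several such layers and feeding the final features to a plain ELM classifier gives the ML-ELM structure the abstract describes.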
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. 
As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
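As context for the HJB/EL contrast drawn above, a standard textbook statement of the Hamilton-Jacobi-Bellman equation for dynamics $\dot{x} = f(x,u)$ and running cost $L(x,u)$ is (generic form, not transcribed from the dissertation):

```latex
% Hamilton-Jacobi-Bellman equation: the optimal cost-to-go J^* satisfies
-\frac{\partial J^*}{\partial t}(x,t)
  = \min_{u}\left[\, L(x,u) + \frac{\partial J^*}{\partial x}(x,t)\, f(x,u) \right]
```

with the minimising $u^*(x,t)$ giving the optimal feedback law, whereas the Euler-Lagrange route yields only an open-loop optimal control sequence.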
Influence of Domain Shift Factors on Deep Segmentation of the Drivable Path of AN Autonomous Vehicle
NASA Astrophysics Data System (ADS)
Bormans, R. P. A.; Lindenbergh, R. C.; Karimi Nejadasl, F.
2018-05-01
One of the biggest challenges for an autonomous vehicle (and hence the WEpod) is to see the world as humans would see it. This understanding is the base for a successful and reliable future of autonomous vehicles. Real-world data and semantic segmentation are generally used to achieve full understanding of the vehicle's surroundings. However, deploying a pretrained segmentation network to a new, previously unseen domain will not attain performance similar to that on the domain it was trained on, due to the differences between the domains. Although research has been done concerning the mitigation of this domain shift, the factors that cause these differences are not yet fully explored. We filled this gap with the investigation of several factors. A base network was created by a two-step fine-tuning procedure on a convolutional neural network (SegNet) which is pretrained on CityScapes (a dataset for semantic segmentation). The first tuning step is based on RobotCar (a road scenery dataset recorded in Oxford, UK), after which the network is fine-tuned a second time on the KITTI (road scenery dataset recorded in Germany) dataset. With this base, experiments were conducted to assess the importance of factors such as horizon line, colour and training order for successful domain adaptation. In this case the domain adaptation is from the KITTI and RobotCar domains to the WEpod domain. For evaluation, ground-truth labels were created in a weakly-supervised setting. Training on greyscale images instead of RGB images had a negative influence, resulting in drops of IoU values of up to 23.9 % for WEpod test images. Training order is a main contributor to domain adaptation, with an increase in IoU of 4.7 %. This shows that the target domain (WEpod) is more closely related to RobotCar than to KITTI.
ERIC Educational Resources Information Center
Faria, Carlos; Vale, Carolina; Machado, Toni; Erlhagen, Wolfram; Rito, Manuel; Monteiro, Sérgio; Bicho, Estela
2016-01-01
Robotics has been playing an important role in modern surgery, especially in procedures that require extreme precision, such as neurosurgery. This paper addresses the challenge of teaching robotics to undergraduate engineering students, through an experiential learning project of robotics fundamentals based on a case study of robot-assisted…
Blank, Amy A; French, James A; Pehlivan, Ali Utku; O'Malley, Marcia K
2014-09-01
Stroke is one of the leading causes of long-term disability today; therefore, many research efforts are focused on designing maximally effective and efficient treatment methods. In particular, robotic stroke rehabilitation has received significant attention for upper-limb therapy due to its ability to provide high-intensity repetitive movement therapy with less effort than would be required for traditional methods. Recent research has focused on increasing patient engagement in therapy, which has been shown to be important for inducing neural plasticity to facilitate recovery. Robotic therapy devices enable unique methods for promoting patient engagement by providing assistance only as needed and by detecting patient movement intent to drive the device. Use of these methods has demonstrated improvements in functional outcomes, but careful comparisons between methods remain to be done. Future work should include controlled clinical trials and comparisons of effectiveness of different methods for patients with different abilities and needs in order to inform future development of patient-specific therapeutic protocols.
Artificial neural network EMG classifier for functional hand grasp movements prediction.
Gandolla, Marta; Ferrante, Simona; Ferrigno, Giancarlo; Baldassini, Davide; Molteni, Franco; Guanziroli, Eleonora; Cotti Cottini, Michele; Seneci, Carlo; Pedrocchi, Alessandra
2017-12-01
Objective: To design and implement an electromyography (EMG)-based controller for a hand robotic assistive device, which is able to classify the user's motion intention before the effective kinematic movement execution. Methods: Multiple degrees-of-freedom hand grasp movements (i.e. pinching, grasping an object, grasping) were predicted by means of surface EMG signals, recorded from 10 bipolar EMG electrodes arranged in a circular configuration around the forearm, 2-3 cm from the elbow. Two cascaded artificial neural networks were then exploited to detect the patient's motion intention from the EMG signal window starting from the electrical activity onset to movement onset (i.e. the electromechanical delay). Results: The proposed approach was tested on eight healthy control subjects (4 females; age range 25-26 years) and demonstrated a mean ± SD testing performance of 76% ± 14% for correctly predicting healthy users' motion intention. Two post-stroke patients tested the controller and obtained 79% and 100% correctly classified movements under testing conditions. Conclusion: A task-selection controller was developed to estimate the intended movement from the EMG measured during the electromechanical delay.
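The EMG window used by the classifier starts at the electrical-activity onset, which must itself be detected. A common generic approach (the paper's own onset criterion is not reproduced here; window lengths and thresholds are assumptions) flags the first sample where a short-window RMS envelope exceeds a baseline-derived threshold:

```python
import numpy as np

# Simple EMG activity-onset detector: first sample where the short-window
# RMS envelope exceeds a baseline-derived threshold. Generic method;
# parameters are illustrative assumptions.

def emg_onset(signal, fs=1000, win_ms=50, k=4.0, baseline_ms=200):
    win = int(fs * win_ms / 1000)
    base = int(fs * baseline_ms / 1000)
    # Moving-window RMS envelope of the raw signal
    rms = np.sqrt(np.convolve(signal**2, np.ones(win) / win, mode="same"))
    # Threshold from the quiet baseline segment: mean + k standard deviations
    thresh = rms[:base].mean() + k * rms[:base].std()
    above = np.nonzero(rms > thresh)[0]
    return int(above[0]) if above.size else None

rng = np.random.default_rng(1)
sig = rng.normal(0, 0.05, 1000)           # quiet baseline noise
sig[600:] += rng.normal(0, 1.0, 400)      # burst of muscle activity at sample 600
onset = emg_onset(sig)                    # detected near sample 600
```

In the setting described above, the samples between this onset and the kinematic movement onset would form the classifier's input window.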
NASA Astrophysics Data System (ADS)
Schwartz, Andrew B.
2016-07-01
The target paper by Santello et al. [1] uses the observation that hand shape during grasping can be described by a small set of basic postures, or "synergies," to describe the possible neural basis of motor control during this complex behavior. In the literature, the term "synergy" has been used with a number of different meanings and is still loosely defined, making it difficult to derive concrete analogs of corresponding neural structure. Here, I will define "synergy" broadly, as a set of parameters bound together by a pattern of correlation. With this definition, it can be argued that behavioral synergies are just one facet of the correlational structuring used by the brain to generate behavior. As pointed out in the target article, the structure found in synergies is driven by the physical constraints of our bodies and our surroundings, combined with the behavioral control imparted by our nervous system. This control itself is based on correlational structure which is likely to be a fundamental property of brain function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the "dreams" of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
Hand-in-hand advances in biomedical engineering and sensorimotor restoration.
Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio
2015-05-15
Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. The most recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of so-called brain-machine interfaces. Combining rehabilitation robotics and experimental neuroscience, the introduction of brain-machine interfaces into clinical protocols has provided the technological solution for bypassing the neural disconnection and restoring sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptom relief, but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to the translation of the signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.
Older adults' acceptance of a robot for partner dance-based exercise.
Chen, Tiffany L; Bhattacharjee, Tapomayukh; Beer, Jenay M; Ting, Lena H; Hackney, Madeleine E; Rogers, Wendy A; Kemp, Charles C
2017-01-01
Partner dance has been shown to be beneficial for the health of older adults. Robots could potentially facilitate healthy aging by engaging older adults in partner dance-based exercise. However, partner dance involves physical contact between the dancers, and older adults would need to be accepting of partner dancing with a robot. Using methods from the technology acceptance literature, we conducted a study with 16 healthy older adults to investigate their acceptance of robots for partner dance-based exercise. Participants successfully led a human-scale wheeled robot with arms (i.e., a mobile manipulator) in a simple stepping task, which we refer to as the Partnered Stepping Task (PST). Participants led the robot by maintaining physical contact and applying forces to the robot's end effectors. According to questionnaires, participants were generally accepting of the robot for partner dance-based exercise, tending to perceive it as useful, easy to use, and enjoyable. Participants tended to perceive the robot as easier to use after performing the PST with it. Through a qualitative data analysis of structured interview data, we also identified facilitators and barriers to acceptance of robots for partner dance-based exercise. Throughout the study, our robot used admittance control to successfully dance with older adults, demonstrating the feasibility of this method. Overall, our results suggest that robots could successfully engage older adults in partner dance-based exercise.
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state-of-the-art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in its application environment without assisted orientation, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot, without external nodes or assistance, using only its own Micro-Electro-Mechanical Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and a fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended Kalman Filter (EKF) position estimation method under the constraints of the robot's motion characteristics. Tests with the self-developed snake robot verify the proposed method: the position error is less than 5% of Total Traveled Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot for autonomous navigation and positioning in traditional applications and can be extended to other similar multi-link robots.
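The EKF position-estimation step can be illustrated with a generic planar dead-reckoning filter: predict through a unicycle motion model, then correct with a position fix. The state vector, noise levels, and measurement model below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Minimal planar EKF for dead-reckoning with position fixes. A generic
# sketch only; the paper's state vector, motion model and noise values
# are not reproduced here.

def ekf_step(x, P, v, w, z, dt, Q, R):
    """One EKF predict/update. State x = [px, py, heading]."""
    px, py, th = x
    # --- predict: unicycle motion model ---
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],   # Jacobian of the model
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # --- update: position measurement z = [px, py] ---
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Drive straight at 1 m/s for 100 steps with noisy position fixes.
rng = np.random.default_rng(0)
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-4, np.eye(2) * 0.01
for k in range(1, 101):
    true_pos = np.array([k * 0.1, 0.0])           # dt = 0.1 s
    z = true_pos + rng.normal(0, 0.1, 2)
    x, P = ekf_step(x, P, v=1.0, w=0.0, z=z, dt=0.1, Q=Q, R=R)
```

In the paper's setting, the zero-state motion constraints would enter as additional pseudo-measurements in the update step.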
Designing speech-based interfaces for telepresence robots for people with disabilities.
Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David
2013-06-01
People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.
Arena, Paolo; Calí, Marco; Patané, Luca; Portera, Agnese; Strauss, Roland
2016-09-01
Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models, inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is here presented under the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot are reported to show the capabilities of the architecture used as a neural controller.
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming increasingly important. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including the cingulate cortex and the right inferior and middle frontal gyri, which are involved in the go-signal and in decision control. Results on healthy subjects would suggest the appropriateness of the abstract visual feedback provided during motor training. The task helps to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Cognitive Mapping Based on Conjunctive Representations of Space and Movement
Zeng, Taiping; Si, Bailu
2017-01-01
It is a challenge to build robust simultaneous localization and mapping (SLAM) system in dynamical large-scale environments. Inspired by recent findings in the entorhinal–hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information by conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drifting errors of the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model is robust in building a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different light conditions and terrains. The results in this study could inspire both neuroscience and robotic research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems in large-scale environments. PMID:29213234
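The head-direction component of the model described above can be illustrated with a toy continuous attractor on a ring. This is only a minimal sketch under simplified assumptions (rate neurons, cosine recurrent weights, Euler updates, rectification plus normalisation standing in for global inhibition), not the networks used in the paper; all parameter values are illustrative.

```python
import math

N = 36  # number of head-direction cells, preferred directions every 10 degrees
prefs = [2 * math.pi * i / N for i in range(N)]

def step(act, ang_vel, dt=0.1, visual_cue=None):
    """One Euler update of a toy head-direction ring attractor.

    Symmetric cosine recurrent weights sustain an activity bump; an
    asymmetric, velocity-weighted component shifts the bump in
    proportion to angular velocity (path integration); an optional
    visual cue pulls the bump toward an absolute heading, correcting
    drift accumulated from noisy velocity input.
    """
    new = []
    for i in range(N):
        drive = 0.0
        for j in range(N):
            d = prefs[i] - prefs[j]
            # cos(d) stabilises the bump; sin(d) moves it along the ring
            drive += (math.cos(d) + ang_vel * dt * math.sin(d)) * act[j]
        if visual_cue is not None:
            drive += 0.5 * max(0.0, math.cos(prefs[i] - visual_cue))
        new.append(max(0.0, drive))            # rectification
    total = sum(new) or 1.0
    return [a / total for a in new]            # normalisation ~ global inhibition

def decoded_heading(act):
    """Population-vector decoding of the bump position, in [0, 2*pi)."""
    x = sum(a * math.cos(p) for a, p in zip(act, prefs))
    y = sum(a * math.sin(p) for a, p in zip(act, prefs))
    return math.atan2(y, x) % (2 * math.pi)
```

Starting from a bump at heading 0, repeated `step(act, 0.5)` calls advance the decoded heading by roughly `atan(ang_vel * dt)` per update, while supplying `visual_cue` pulls the bump toward the cued direction regardless of velocity input.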
Sub-processes of motor learning revealed by a robotic manipulandum for rodents.
Lambercy, O; Schubring-Giese, M; Vigaru, B; Gassert, R; Luft, A R; Hosp, J A
2015-02-01
Rodent models are widely used to investigate neural changes in response to motor learning. Usually, the behavioral readout of motor learning tasks used for this purpose is restricted to a binary measure of performance (i.e. "successful" movement vs. "failure"). Thus, the transferability of rodent findings to concepts gained in human research - which posit diverse internal models underlying motor learning - is still limited. To solve this problem, we recently introduced a three-degree-of-freedom robotic platform designed for rats (the ETH-Pattus) that combines an accurate behavioral readout (in the form of kinematics) with the possibility of invasively assessing learning-related changes within the brain (e.g. by performing immunohistochemistry or electrophysiology in acute slice preparations). Here, we validate this platform as a tool to study motor learning by establishing two forelimb-reaching paradigms that differ in degree of skill. Both conditions can be precisely differentiated in terms of their temporal pattern and performance levels. Based on behavioral data, we hypothesize the presence of several sub-processes contributing to motor learning. These share close similarities with concepts gained in humans or primates.
The 1991 Goddard Conference on Space Applications of Artificial Intelligence
NASA Technical Reports Server (NTRS)
Rash, James L. (Editor)
1991-01-01
The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in this proceeding fall into the following areas: Planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.
Saleh, Soha; Fluet, Gerard; Qiu, Qinyin; Merians, Alma; Adamovich, Sergei V.; Tunik, Eugene
2017-01-01
Several approaches to rehabilitation of the hand following a stroke have emerged over the last two decades. These treatments, including repetitive task practice (RTP), robotically assisted rehabilitation and virtual rehabilitation activities, produce improvements in hand function but have yet to reinstate function to pre-stroke levels—which likely depends on developing the therapies to impact cortical reorganization in a manner that favors or supports recovery. Understanding cortical reorganization that underlies the above interventions is therefore critical to inform how such therapies can be utilized and improved and is the focus of the current investigation. Specifically, we compare neural reorganization elicited in stroke patients participating in two interventions: a hybrid of robot-assisted virtual reality (RAVR) rehabilitation training and a program of RTP training. Ten chronic stroke subjects participated in eight 3-h sessions of RAVR therapy. Another group of nine stroke subjects participated in eight sessions of matched RTP therapy. Functional magnetic resonance imaging (fMRI) data were acquired during paretic hand movement, before and after training. We compared the difference between groups and sessions (before and after training) in terms of BOLD intensity, laterality index of activation in sensorimotor areas, and the effective connectivity between ipsilesional motor cortex (iMC), contralesional motor cortex, ipsilesional primary somatosensory cortex (iS1), ipsilesional ventral premotor area (iPMv), and ipsilesional supplementary motor area. Last, we analyzed the relationship between changes in fMRI data and functional improvement measured by the Jebsen Taylor Hand Function Test (JTHFT), in an attempt to identify how neurophysiological changes are related to motor improvement. Subjects in both groups demonstrated motor recovery after training, but fMRI data revealed RAVR-specific changes in neural reorganization patterns. First, BOLD signal in multiple regions of interest was reduced and re-lateralized to the ipsilesional side. Second, these changes correlated with improvement in JTHFT scores. Our findings suggest that RAVR training may lead to different neurophysiological changes when compared with traditional therapy. This effect may be attributed to the influence that augmented visual and haptic feedback during RAVR training exerts over higher-order somatosensory and visuomotor areas. PMID:28928708
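The laterality index compared above is conventionally defined in the fMRI literature as LI = (L - R) / (L + R) over activation in homologous left and right regions of interest. A minimal sketch of that definition follows; the study's exact thresholding and ROI choices are not specified here, so treat the inputs as generic activation measures.

```python
def laterality_index(left, right):
    """Conventional fMRI laterality index: LI = (L - R) / (L + R).

    `left` and `right` are activation measures (e.g. suprathreshold
    voxel counts or summed beta values) from homologous regions of
    interest. LI = +1 means fully left-lateralized activation,
    LI = -1 fully right-lateralized, and 0 symmetric.
    """
    l, r = sum(left), sum(right)
    return (l - r) / (l + r) if (l + r) else 0.0
```

In a stroke context the convention is often stated with ipsilesional and contralesional hemispheres in place of left and right, so a positive shift after training indicates re-lateralization toward the lesioned side.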
Robotic assessment of neuromuscular characteristics using musculoskeletal models: A pilot study.
Jayaneththi, V R; Viloria, J; Wiedemann, L G; Jarrett, C; McDaid, A J
2017-07-01
Non-invasive neuromuscular characterization aims to provide greater insight into the effectiveness of existing and emerging rehabilitation therapies by quantifying neuromuscular characteristics relating to force production, muscle viscoelasticity and voluntary neural activation. In this paper, we propose a novel approach to evaluate neuromuscular characteristics, such as muscle fiber stiffness and viscosity, by combining robotic and HD-sEMG measurements with computational musculoskeletal modeling. This pilot study investigates the efficacy of this approach on a healthy population and provides new insight into potential limitations of conventional musculoskeletal models for this application. Subject-specific neuromuscular characteristics of the biceps and triceps brachii were evaluated using robot-measured kinetics, kinematics and EMG activity as inputs to a musculoskeletal model. Repeatability experiments in five participants revealed large variability within each subject's evaluated characteristics, with almost all experiencing variation greater than 50% of full scale when repeating the same task. The use of robotics and HD-sEMG, in conjunction with musculoskeletal modeling, to quantify neuromuscular characteristics has been explored. Despite the ability to predict joint kinematics with relatively high accuracy, parameter characterization was inconsistent, i.e. many parameter combinations gave rise to minimal kinematic error. The proposed technique is a novel approach for in vivo neuromuscular characterization and is a step towards the realization of objective in-home robot-assisted rehabilitation. Importantly, the results have confirmed the technical (robot and HD-sEMG) feasibility while highlighting the need to develop new musculoskeletal models and optimization techniques capable of achieving consistent results across a range of dynamic tasks.
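The identifiability problem the authors report (many parameter combinations producing the same minimal kinematic error) can be illustrated with a toy antagonistic-muscle model: since only the net joint torque drives the kinematics, co-contraction is invisible to a purely kinematic fit. This is a hedged sketch, not the authors' musculoskeletal model; all parameter values are invented.

```python
def net_torque(a_flex, a_ext, f_max=300.0, moment_arm=0.03):
    """Net elbow torque from an antagonistic flexor/extensor pair.

    With equal moment arms, only the difference of activations matters,
    so raising both activations together (co-contraction) leaves the
    net torque, and hence the kinematics, unchanged.
    """
    return moment_arm * f_max * (a_flex - a_ext)

def simulate(a_flex, a_ext, inertia=0.05, stiffness=2.0, damping=0.2,
             dt=0.01, steps=100):
    """Forward-simulate elbow angle under constant activations (Euler)."""
    theta, omega, traj = 0.0, 0.0, []
    tau = net_torque(a_flex, a_ext)
    for _ in range(steps):
        omega += (tau - stiffness * theta - damping * omega) / inertia * dt
        theta += omega * dt
        traj.append(theta)
    return traj
```

Here `simulate(0.5, 0.2)` and `simulate(0.8, 0.5)` produce identical trajectories, so any kinematic cost function is flat along this co-contraction direction; distinguishing the two solutions requires an additional measurement channel such as EMG, which is the motivation for combining the robot with HD-sEMG.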
Merians, Alma S; Fluet, Gerard G; Qiu, Qinyin; Saleh, Soha; Lafond, Ian; Davidow, Amy; Adamovich, Sergei V
2011-05-16
Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training.
2011-01-01
Background Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Methods Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. Results The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Conclusions Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training. PMID:21575185
Holanda, Ledycnarf J; Silva, Patrícia M M; Amorim, Thiago C; Lacerda, Matheus O; Simão, Camila R; Morya, Edgard
2017-12-04
Spinal cord injury (SCI) is characterized by a total or partial deficit of sensory and motor pathways. The impairments caused by this injury compromise muscle recruitment and motor planning, thus reducing functional capacity. SCI patients commonly present psychological, intestinal, urinary, musculoskeletal, integumentary, cardiorespiratory and neural alterations that worsen in the chronic phase. One of the goals of neurorehabilitation is the restoration of these abilities, favoring improvement in quality of life and functional independence. Current literature highlights several benefits of robotic gait therapies in SCI individuals. The purpose of this study was to compare robotic gait devices and systematize the scientific evidence on these devices as a tool for rehabilitation of SCI individuals. A systematic review was carried out in which relevant articles were identified by searching the following databases: Cochrane Library, PubMed, PEDro and Capes Periodic. Two authors selected the articles that used a robotic device for rehabilitation of spinal cord injury. The database search found 2941 articles, of which 39 met the inclusion criteria and were included. The robotic devices presented distinct features, with increasing application in recent years. Studies have shown promising results regarding the reduction of pain perception and spasticity level; alteration of proprioceptive capacity, sensitivity to temperature, vibration, pressure, reflex behavior, electrical activity at the muscular and cortical levels, and classification of the injury level; increases in walking speed, step length and distance traveled; and improvements in sitting posture and intestinal, cardiorespiratory, metabolic, integumentary and psychological functions. This systematic review shows significant progress in the use of robotic devices as an innovative and effective therapy for the rehabilitation of individuals with SCI.
Older adults’ acceptance of a robot for partner dance-based exercise
Chen, Tiffany L.; Beer, Jenay M.; Ting, Lena H.; Hackney, Madeleine E.; Rogers, Wendy A.; Kemp, Charles C.
2017-01-01
Partner dance has been shown to be beneficial for the health of older adults. Robots could potentially facilitate healthy aging by engaging older adults in partner dance-based exercise. However, partner dance involves physical contact between the dancers, and older adults would need to be accepting of partner dancing with a robot. Using methods from the technology acceptance literature, we conducted a study with 16 healthy older adults to investigate their acceptance of robots for partner dance-based exercise. Participants successfully led a human-scale wheeled robot with arms (i.e., a mobile manipulator) in a simple stepping task, which we refer to as the Partnered Stepping Task (PST). Participants led the robot by maintaining physical contact and applying forces to the robot’s end effectors. According to questionnaires, participants were generally accepting of the robot for partner dance-based exercise, tending to perceive it as useful, easy to use, and enjoyable. Participants tended to perceive the robot as easier to use after performing the PST with it. Through a qualitative data analysis of structured interview data, we also identified facilitators and barriers to acceptance of robots for partner dance-based exercise. Throughout the study, our robot used admittance control to successfully dance with older adults, demonstrating the feasibility of this method. Overall, our results suggest that robots could successfully engage older adults in partner dance-based exercise. PMID:29045408
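The admittance control mentioned above maps a measured interaction force to a commanded robot velocity through virtual mass-damper dynamics, M*dv/dt + B*v = f, which is what lets a human lead the robot by pushing on its end effectors. The following is a minimal one-dimensional sketch with illustrative parameters, not the study's actual controller.

```python
def admittance_step(v_cmd, f_meas, mass=10.0, damping=25.0, dt=0.01):
    """One Euler update of a 1-D admittance controller.

    The robot renders the virtual dynamics  M*dv/dt + B*v = f  so that
    a measured interaction force f produces a compliant commanded
    velocity: pushing harder moves the robot faster, and a sustained
    force converges to the steady-state velocity f / B.
    """
    dv = (f_meas - damping * v_cmd) / mass
    return v_cmd + dv * dt
```

With these values, a sustained 25 N lead force converges to 25 / 25 = 1 m/s, and releasing the robot (zero force) lets it coast to rest; raising `damping` makes the robot feel more resistant to its dance partner, while raising `mass` makes it feel heavier to accelerate.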