Sample records for biped humanoid robots

  1. Foot Placement Modification for a Biped Humanoid Robot with Narrow Feet

    PubMed Central

    Hashimoto, Kenji; Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to inhibit the robot from falling outward. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. The proposed method is verified through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with 30 mm wide feet. Moreover, WABIAN-2R equipped with a human-like foot mechanism mimicking the human foot arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion. PMID:24592154

  2. Foot placement modification for a biped humanoid robot with narrow feet.

    PubMed

    Hashimoto, Kenji; Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to inhibit the robot from falling outward. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. The proposed method is verified through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with 30 mm wide feet. Moreover, WABIAN-2R equipped with a human-like foot mechanism mimicking the human foot arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion.
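    The placement-modification idea in this record can be sketched as follows. The linear correction and the gains are illustrative assumptions, not the authors' actual control law (which WABIAN-2R implements jointly with virtual compliance control):

    ```python
    def modify_foot_placement(attitude_roll, nominal_y, k_angle=1.0, k_lateral=0.05):
        """Attitude-based swing-foot modification (illustrative gains).

        attitude_roll : measured trunk roll angle [rad], positive toward the
                        swing-leg side
        nominal_y     : nominal lateral landing coordinate of the swing foot [m]
        Returns (foot_roll_cmd, landing_y).
        """
        # Rotate the swing foot about the roll axis to cancel the trunk lean,
        # so the sole contacts the ground horizontally at touchdown.
        foot_roll_cmd = -k_angle * attitude_roll
        # Shift the landing point laterally with the lean to widen the support
        # region and inhibit the robot from falling outward.
        landing_y = nominal_y + k_lateral * attitude_roll
        return foot_roll_cmd, landing_y
    ```

    With a 0.1 rad lean and a 0.10 m nominal landing point, the sketch commands a -0.1 rad foot roll and moves the landing point out to 0.105 m.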

  3. Walk-Startup of a Two-Legged Walking Mechanism

    NASA Astrophysics Data System (ADS)

    Babković, Kalman; Nagy, László; Krklješ, Damir; Borovac, Branislav

    There is growing interest in humanoid robots. One of their most important characteristics is two-legged motion - walking. Starting and stopping of humanoid robots introduce substantial delays. In this paper, the goal is to explore the possibility of using a short unbalanced state of the biped robot to quickly gain speed and reach the steady-state velocity within a period shorter than half of the single-support phase. The proposed method is verified by simulation. Maintaining a steady-state, balanced gait is not considered in this paper.

  4. A Control Framework for Anthropomorphic Biped Walking Based on Stabilizing Feedforward Trajectories.

    PubMed

    Rezazadeh, Siavash; Gregg, Robert D

    2016-10-01

    Although dynamic walking methods have had notable successes in the control of bipedal robots in recent years, most humanoid robots still rely on quasi-static Zero Moment Point controllers. This work is an attempt to design a highly stable controller for dynamic walking of a human-like model, which can be used both for control of humanoid robots and for prosthetic legs. The method is based on time-based trajectories that can induce a highly stable limit cycle in the bipedal robot. The time-based nature of the controller motivates its use to entrain a model of an amputee walking, which can potentially lead to better coordination of the interaction between the prosthesis and the human. Simulations demonstrate the stability of the controller and its robustness against external perturbations.

  5. Development of a neuromorphic control system for a lightweight humanoid robot

    NASA Astrophysics Data System (ADS)

    Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo

    2017-03-01

    A neuromorphic control system for a lightweight, middle-sized humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neurons' outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization of the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, showed that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and a variance of 0.74.
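    The fixed-reservoir-plus-trained-readout architecture described here can be sketched with an echo-state-style network. The network size, connectivity, spectral radius, and least-squares readout training below are assumptions for illustration, not the authors' chaotic-RNN configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Reservoir: N dynamic neurons with fixed, sparse, random recurrent synapses.
    N, T = 200, 400
    W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)  # 10% connectivity
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))                # spectral radius 0.9
    w_in = rng.normal(0.0, 0.5, N)

    # Drive the reservoir with a periodic input and collect the neuron states.
    t = np.arange(T)
    u = np.sin(2 * np.pi * t / 50)          # driving signal
    target = np.sin(4 * np.pi * t / 50)     # periodic joint trajectory to reproduce
    x = np.zeros(N)
    states = np.empty((T, N))
    for k in range(T):
        x = np.tanh(W @ x + w_in * u[k])
        states[k] = x

    # Read-out: adaptable linear combination of the neurons' outputs, trained by
    # least squares on the states after a short washout period.
    w_out, *_ = np.linalg.lstsq(states[50:], target[50:], rcond=None)
    rms = np.sqrt(np.mean((states[50:] @ w_out - target[50:]) ** 2))
    ```

    Only the readout weights `w_out` are adapted; the recurrent core stays fixed, which is what makes the training a cheap linear regression.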

  6. Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses.

    PubMed

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot.

  7. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    PubMed Central

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428

  8. Optimization-based Dynamic Human Walking Prediction

    DTIC Science & Technology

    2007-01-01

    9(1), 1997, p 10-17. 3. Chevallereau, C. and Aousin, Y. Optimal reference trajectories for walking and running of a biped robot. Robotica , v 19...28, 2001, Arlington, Virginia. 13. Mu, XP. and Wu, Q. Synthesis of a complete sagittal gait cycle for a five-link biped robot. Robotica , v 21...gait cycles of a biped robot. Robotica , v 21(2), 2003, p 199-210. 16. Sardain, P. and Bessonnet, G. Forces acting on a biped robot. Center of

  9. Comparison of Human and Humanoid Robot Control of Upright Stance

    PubMed Central

    Peterka, Robert J.

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to ~1 Hz) dynamic characteristics of human stance control. These subsystems are 1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e., position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions, and 2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. 
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different. PMID:19665564

  10. Comparison of human and humanoid robot control of upright stance.

    PubMed

    Peterka, Robert J

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. 
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
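    The role of feedback time delay discussed in these two records can be illustrated with a minimal simulation: a single-link inverted pendulum balanced about the ankle by PD feedback that only sees 150 ms old state. All parameters below are assumed round numbers for illustration, not values from the paper:

    ```python
    import math
    from collections import deque

    # Single-link inverted pendulum stance model with delayed PD ankle torque.
    m, h, g = 70.0, 1.0, 9.81       # body mass [kg], COM height [m]
    J = m * h * h                   # point-mass inertia about the ankle
    Kp, Kd = 1200.0, 350.0          # stiffness must exceed m*g*h ~ 687 N*m/rad
    dt, delay_steps = 0.001, 150    # 1 ms step, 150 ms sensory feedback delay

    theta, omega = 0.02, 0.0        # start with a 0.02 rad forward lean
    buf = deque([(theta, omega)] * delay_steps)   # delayed state estimates
    peak = abs(theta)
    for _ in range(10000):          # 10 s of simulated standing
        th_d, om_d = buf.popleft()  # the controller only sees 150 ms old state
        torque = -Kp * th_d - Kd * om_d
        alpha = (m * g * h * math.sin(theta) + torque) / J  # gravity destabilizes
        theta += omega * dt
        omega += alpha * dt
        buf.append((theta, omega))
        peak = max(peak, abs(theta))
    ```

    With these gains the lean decays despite the delay; raising the delay or the gains much further makes the same loop oscillate and diverge, which is the design constraint the paper examines.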

  11. Single-step collision-free trajectory planning of biped climbing robots in spatial trusses.

    PubMed

    Zhu, Haifei; Guan, Yisheng; Chen, Shengjun; Su, Manjia; Zhang, Hong

    For a biped climbing robot with dual grippers to climb poles, trusses or trees, feasible collision-free climbing motion is essential. In this paper, we utilize a sampling-based algorithm, Bi-RRT, to plan single-step collision-free motion for biped climbing robots in spatial trusses. To deal with the orientation limit of a 5-DoF biped climbing robot, a new state representation along with corresponding operations, including sampling, metric calculation and interpolation, is presented. A simple but effective model of a biped climbing robot in trusses is proposed, through which the motion planning of one climbing cycle is transformed into that of a manipulator. In addition, pre- and post-processing steps are introduced to expedite the convergence of the Bi-RRT algorithm and to ensure the safe motion of the climbing robot near poles. The piecewise linear paths are smoothed using cubic B-spline curve fitting. The effectiveness and efficiency of the presented Bi-RRT algorithm for climbing motion planning are verified by simulations.
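    A minimal planar sketch of the Bi-RRT idea follows. The paper plans in the climbing robot's 5-DoF configuration space with pre-/post-processing and B-spline smoothing; a 2-D point robot and a single circular obstacle stand in here purely for illustration:

    ```python
    import math, random

    random.seed(1)
    OBS, R, STEP = (5.0, 5.0), 2.0, 0.5     # circular obstacle and step size

    def free(p):
        return math.dist(p, OBS) > R

    def seg_free(a, b, n=10):               # sample the segment for collisions
        return all(free((a[0] + (b[0] - a[0]) * i / n,
                         a[1] + (b[1] - a[1]) * i / n)) for i in range(n + 1))

    def extend(tree, q_rand):
        """Grow `tree` one STEP toward q_rand; return the new node or None."""
        q_near = min(tree, key=lambda q: math.dist(q, q_rand))
        d = math.dist(q_near, q_rand)
        if d < 1e-9:
            return None
        s = min(1.0, STEP / d)
        q_new = (q_near[0] + (q_rand[0] - q_near[0]) * s,
                 q_near[1] + (q_rand[1] - q_near[1]) * s)
        if seg_free(q_near, q_new):
            tree[q_new] = q_near            # parent pointer for path recovery
            return q_new
        return None

    start, goal = (1.0, 1.0), (9.0, 9.0)
    tree_a, tree_b = {start: None}, {goal: None}
    connected = False
    for _ in range(2000):
        q_rand = (random.uniform(0, 10), random.uniform(0, 10))
        q_new = extend(tree_a, q_rand)
        if q_new:
            q_link = extend(tree_b, q_new)  # greedy connect attempt
            if q_link and math.dist(q_link, q_new) < 1e-6:
                connected = True
                break
        tree_a, tree_b = tree_b, tree_a     # alternate the growing tree
    ```

    Two trees grow from the start and goal configurations and the planner stops when a collision-free link joins them; the parent pointers in each dict allow the piecewise linear path to be read back for smoothing.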

  12. SVR versus neural-fuzzy network controllers for the sagittal balance of a biped robot.

    PubMed

    Ferreira, João P; Crisóstomo, Manuel M; Coimbra, A Paulo

    2009-12-01

    The real-time balance control of an eight-link biped robot using a zero moment point (ZMP) dynamic model is difficult due to the processing time of the corresponding equations. To overcome this limitation, two alternative intelligent computing control techniques were compared: one based on support vector regression (SVR) and another based on a first-order Takagi-Sugeno-Kang (TSK)-type neural-fuzzy (NF) network. Both methods use the ZMP error and its variation as inputs, and the output is the correction of the robot's torso necessary for its sagittal balance. The SVR and the NF network were trained on simulation data and their performance was verified with a real biped robot. Two performance indexes are proposed to evaluate and compare the online performance of the two control methods. The ZMP is calculated by reading four force sensors placed under each of the robot's feet. The gait implemented in this biped is similar to a human gait, which was acquired and adapted to the robot's size. Some experiments are presented and the results show that the implemented gait, combined either with the SVR controller or with the TSK NF network controller, can be used to control this biped robot. The two controllers exhibit similar stability, but the SVR controller runs about 50 times faster.
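    The ZMP measurement described here, four force sensors under each foot, reduces to a force-weighted average of the sensor positions; the sensor layout in the example below is an assumed geometry, not the robot's actual foot:

    ```python
    def zmp_from_force_sensors(sensors):
        """Sketch: ZMP of one foot from vertical force sensor readings.

        sensors: list of ((x, y), fz) pairs -- sensor position in the foot
        frame [m] and measured vertical force [N].  The ZMP is the
        force-weighted average of the sensor positions.
        """
        total = sum(fz for _, fz in sensors)
        if total <= 0.0:
            return None                     # foot not in contact
        x = sum(pos[0] * fz for pos, fz in sensors) / total
        y = sum(pos[1] * fz for pos, fz in sensors) / total
        return (x, y)
    ```

    With equal forces at the four corners of the foot the ZMP sits at the foot center; loading the front sensors more shifts it forward, and the controller acts on the difference between this measured point and the reference ZMP.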

  13. The motion control of a statically stable biped robot on an uneven floor.

    PubMed

    Shih, C L; Chiou, C J

    1998-01-01

    This work studies the motion control of a statically stable biped robot having seven degrees of freedom. Statically stable walking of the biped robot is realized by maintaining the center of gravity inside the convex region of the supporting foot and/or feet during both single-support and double-support phases. The main contributions of this work are a simple and correct formulation of stability, the design of a statically stable bipedal walker, and walking on sloping surfaces and stairs.
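    The static stability criterion used in this record, keeping the center of gravity over the convex support region, can be sketched as a point-in-convex-polygon test (assuming the support vertices are given in counter-clockwise order):

    ```python
    def in_support_polygon(cog_xy, polygon):
        """Check that the ground projection of the center of gravity lies
        inside the convex support region (vertices in counter-clockwise
        order) -- the static-stability criterion for each walking phase."""
        x, y = cog_xy
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # the cross product must be non-negative for every CCW edge
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
                return False
        return True
    ```

    During single support the polygon is one footprint; during double support it is the convex hull of both feet, which is why the double-support phase offers a larger stability margin.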

  14. Multi-layer robot skin with embedded sensors and muscles

    NASA Astrophysics Data System (ADS)

    Tomar, Ankit; Tadesse, Yonas

    2016-04-01

    Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facially expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interaction. A novel composite artificial skin consisting of sensors and twisted polymer actuators is proposed. The artificial skin conforms to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin to inject fluids in order to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. Such humanoid robots can be used by computer scientists and behavioral science researchers to test various algorithms and to understand and develop more refined humanoid robots with facial expression capability. Small-scale humanoid robots can also assist ongoing research on therapeutic treatment for autistic children. The multilayer skin enables many soft robots to detect both temperature and pressure while actuating the entire structure.

  15. Motion synthesis and force distribution analysis for a biped robot.

    PubMed

    Trojnacki, Maciej T; Zielińska, Teresa

    2011-01-01

    In this paper, a method of generating biped robot motion using recorded human gait is presented. The recorded data were modified taking into account the velocities available from the robot's drives. The data include only selected joint angles, so the missing values were obtained considering the dynamic postural stability of the robot, which means obtaining an adequate motion trajectory of the so-called Zero Moment Point (ZMP). The method of determining the distribution of ground reaction forces during the biped robot's dynamically stable walk, developed by the authors, is also described. Following the description of the equations characterizing the dynamics of the robot's motion, the values of the components of the ground reaction forces were determined symbolically, as well as the coordinates of the points of contact between the robot's feet and the ground. The theoretical considerations are supported by computer simulation and animation of the robot's motion, carried out using the Matlab/Simulink package and the Simulink 3D Animation Toolbox, which validated the proposed method.

  16. Pareto Design of State Feedback Tracking Control of a Biped Robot via Multiobjective PSO in Comparison with Sigma Method and Genetic Algorithms: Modified NSGAII and MATLAB's Toolbox

    PubMed Central

    Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.

    2014-01-01

    An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of such controllers are usually determined by a tedious trial-and-error process. To eliminate this process, the parameters of the proposed controller are designed using multiobjective evolutionary algorithms: the proposed multiobjective PSO, modified NSGAII, the Sigma method, and MATLAB's Toolbox MOGA. Among these algorithms, the proposed method performs best for controller design, since it provides ample opportunity for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions: the normalized summation of angle errors and the normalized summation of control effort. The obtained results elucidate the efficiency of the proposed controller in controlling a biped robot. PMID:24616619
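    The nondominated (Pareto) filtering underlying all of these multiobjective methods can be sketched for the paper's two minimized objectives, summed angle error and summed control effort; the candidate values below are made-up illustrations:

    ```python
    def pareto_front(points):
        """Extract the nondominated set for two minimized objectives.

        A point p is dominated if some other point q is no worse in both
        objectives (assumes no duplicate points)."""
        front = []
        for p in points:
            dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                            for q in points)
            if not dominated:
                front.append(p)
        return front

    # (angle error, control effort) for five hypothetical gain settings
    candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (5.0, 2.0)]
    front = pareto_front(candidates)
    ```

    The designer then picks a compromise point from the front, which is the "ample opportunity to choose" that the abstract credits to the Pareto approach.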

  17. DARPA Robotics Challenge (DRC) Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot

    DTIC Science & Technology

    2017-02-01

    DARPA ROBOTICS CHALLENGE (DRC) USING HUMAN-MACHINE TEAMWORK TO PERFORM DISASTER RESPONSE WITH A HUMANOID ROBOT FLORIDA INSTITUTE FOR HUMAN AND...AND SUBTITLE DARPA ROBOTICS CHALLENGE (DRC) USING HUMAN-MACHINE TEAMWORK TO PERFORM DISASTER RESPONSE WITH A HUMANOID ROBOT 5a. CONTRACT NUMBER...Human and Machine Cognition (IHMC) from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge

  18. Vocal emotion of humanoid robots: a study from brain mechanism.

    PubMed

    Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong

    2014-01-01

    Driven by rapid, ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, in order to facilitate communication between machines and human beings, especially regarding vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe brain responses and analyze the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings.

  19. Robotic Literacy Learning Companions: Exploring Student Engagement with a Humanoid Robot in an Afterschool Literacy Program

    ERIC Educational Resources Information Center

    Levchak, Sofia

    2016-01-01

    This study was an investigation of the use of a NAO humanoid robot as an effective tool for engaging readers in an afterschool program, as well as to determine whether increasing engagement using a humanoid robot would affect students' reading comprehension compared to traditional forms of instruction. The targeted population of this study was…

  20. Operation analysis of a Chebyshev-Pantograph leg mechanism for a single DOF biped robot

    NASA Astrophysics Data System (ADS)

    Liang, Conghui; Ceccarelli, Marco; Takeda, Yukio

    2012-12-01

    In this paper, an operation analysis of a Chebyshev-Pantograph leg mechanism is presented for a single degree of freedom (DOF) biped robot. The proposed leg mechanism is composed of a Chebyshev four-bar linkage and a pantograph mechanism. In contrast to general fully actuated anthropomorphic leg mechanisms, the proposed leg mechanism has distinctive features such as compactness, low cost, and ease of operation. Kinematic equations of the proposed leg mechanism are formulated for a computer-oriented simulation. Simulation results show the operation performance of the proposed leg mechanism with suitable characteristics. A parametric study has been carried out to evaluate the operation performance as a function of the design parameters. A prototype of a single-DOF biped robot equipped with two of the proposed leg mechanisms has been built at LARM (Laboratory of Robotics and Mechatronics). Experimental tests show the practical, feasible walking ability of the prototype, and drawbacks of the mechanical design are discussed.

  1. Vocal Emotion of Humanoid Robots: A Study from Brain Mechanism

    PubMed Central

    Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong

    2014-01-01

    Driven by rapid, ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, in order to facilitate communication between machines and human beings, especially regarding vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe brain responses and analyze the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings. PMID:24587712

  2. Design and motion control of bioinspired humanoid robot head from servo motors toward artificial muscles

    NASA Astrophysics Data System (ADS)

    Almubarak, Yara; Tadesse, Yonas

    2017-04-01

    The potential applications of humanoid robots in social environments motivate researchers to design and control biomimetic humanoid robots. Generally, people are more interested in interacting with robots that have attributes and movements similar to humans'. The head is one of the most important parts of any social robot. Currently, most humanoid heads use electric motors, pneumatic actuators, or shape memory alloy (SMA) actuators for actuation. Electric and pneumatic actuators take up most of the available space and can cause unsmooth motion, while SMAs are expensive to use in humanoids. Recently, in many robotic projects, Twisted and Coiled Polymer (TCP) artificial muscles have been used as linear actuators that take up little space compared to motors. In this paper, we demonstrate the design process and motion control of a robotic head with TCP muscles. Servo motors and artificial muscles are used for actuating the head motion, controlled by a cost-efficient ARM Cortex-M7 based development board. A complete comparison between the two actuators is presented.

  3. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy

    NASA Astrophysics Data System (ADS)

    Burns, Alec; Tadesse, Yonas

    2014-03-01

    In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet humanoid development needs a seamless integration of artificial muscles, sensors, controllers and structures. We have designed a human-like robot with 15 DOF, a height of 580 mm and an arm span of 925 mm, built using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arms and hands for safe human-robot interaction, and a two-wheel mobile platform for maneuverability, are incorporated in the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented, along with a comprehensive study of the upper body design, mobile base, actuator selection, electronics, and performance evaluation.

  4. A Course in Simulation and Demonstration of Humanoid Robot Motion

    ERIC Educational Resources Information Center

    Liu, Hsin-Yu; Wang, Wen-June; Wang, Rong-Jyue

    2011-01-01

    An introductory course for humanoid robot motion realization for undergraduate and graduate students is presented in this study. The basic operations of AX-12 motors and the mechanics combination of a 16 degrees-of-freedom (DOF) humanoid robot are presented first. The main concepts of multilink systems, zero moment point (ZMP), and feedback…

  5. Teen Sized Humanoid Robot: Archie

    NASA Astrophysics Data System (ADS)

    Baltes, Jacky; Byagowi, Ahmad; Anderson, John; Kopacek, Peter

    This paper describes our first teen-sized humanoid robot, Archie. This robot has been developed in conjunction with Prof. Kopacek’s lab at the Technical University of Vienna. Archie uses brushless motors and harmonic gears with a novel approach to position encoding. Based on our previous experience with small humanoid robots, we developed software to create, store, and play back motions, as well as control methods which automatically balance the robot using feedback from an inertial measurement unit (IMU).

  6. Humanoids in Support of Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier

    2006-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair and operation of lunar/planetary habitats, bases and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges, the solving of which requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.

  7. Reinforcement learning for a biped robot based on a CPG-actor-critic method.

    PubMed

    Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin

    2007-08-01

    Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on rhythmic movements controlled by CPGs. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a recently presented stochastic policy gradient algorithm. We apply this method to the problem of automatically acquiring control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot not only to walk stably but also to adapt to environmental changes.
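    The abstract does not include the controller itself, but the central idea of a CPG, a neural oscillator whose limit cycle drives rhythmic joint motion, can be illustrated with a pair of coupled phase oscillators. This is a generic sketch, not the paper's CPG-actor-critic architecture; the gait frequency and coupling constant are illustrative assumptions.

```python
import math

def cpg_step(phases, dt=0.01, omega=2 * math.pi, coupling=2.0):
    """Advance two coupled phase oscillators (one per leg) by dt.

    Each oscillator runs at base frequency omega and is pulled toward
    being pi radians out of phase with the other, producing an
    alternating (antiphase) leg rhythm."""
    p0, p1 = phases
    dp0 = omega + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = omega + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dp0 * dt, p1 + dp1 * dt)

phases = (0.0, 0.1)  # start nearly in phase
for _ in range(5000):
    phases = cpg_step(phases)
diff = (phases[1] - phases[0]) % (2 * math.pi)
print(round(diff, 2))  # → 3.14
```

    With this coupling law the phase difference converges to π regardless of the initial condition, i.e. the oscillators lock into the antiphase pattern of alternating legs; a joint command can then be read out as a function of each phase.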

  8. Sports Training Support Method by Self-Coaching with Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Toyama, S.; Ikeda, F.; Yasaka, T.

    2016-09-01

    This paper proposes a new training support method called self-coaching with humanoid robots. The proposed method uses two small, inexpensive humanoid robots because of their ready availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain deeper insight into the required correction. Experimental results show the potential of the new training method and identify issues with the self-coaching interface program as future work.

  9. Biomechanics of Step Initiation After Balance Recovery With Implications for Humanoid Robot Locomotion.

    PubMed

    Miller Buffinton, Christine; Buffinton, Elise M; Bieryla, Kathleen A; Pratt, Jerry E

    2016-03-01

    Balance-recovery stepping is often necessary for both humans and humanoid robots to avoid a fall by taking a single step or multiple steps after an external perturbation. The determination of where to step to come to a complete stop has been studied, but little is known about the strategy for initiating forward motion from the static position following such a step. The goal of this study was to examine the human strategy for stepping by moving the back foot forward from a static, double-support position, comparing parameters from normal step length (SL) with those from increasing SLs up to the point of step failure, to provide inspiration for a humanoid control strategy. Healthy young adults instrumented with joint reflective markers executed a prescribed-length step from rest while marker positions and ground reaction forces (GRFs) were measured. The participants were scaled to the Gait2354 model in OpenSim software to calculate body kinematic and joint kinetic parameters, with further post-processing in MATLAB. With increasing SL, participants reduced both static and push-off back-foot GRF. The body center of mass (CoM) lowered and moved forward, with additional lowering at the longer steps, and followed a path centered within the initial base of support (BoS). Step execution was successful if participants gained enough forward momentum at toe-off to move the instantaneous capture point (ICP) to within the BoS defined by the final position of both feet on the front force plate. All lower-extremity joint torques increased with SL except at the ankle. Front-knee work increased dramatically with SL, accompanied by a decrease in back-ankle work. As SL increased, the human strategy changed: participants shifted their CoM forward and downward before toe-off, thus gaining forward momentum, while using less propulsive work from the back ankle and engaging the front knee to straighten the body. The results have significance for human motion, suggesting an upper limit on the SL that can be completed with back-ankle push-off alone before additional knee flexion and torque are needed. For biped control, the results support stability based on capture-point dynamics and suggest a strategy for the center-of-mass trajectory and the distribution of ground reaction forces that can be compared with robot controllers for the initiation of gait after recovery steps.
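    The instantaneous capture point used as the success criterion above has a simple closed form under the linear inverted pendulum model: ICP = x + ẋ/ω with ω = √(g/z), the point over which the CoM would come to rest if the robot stepped there. A minimal sketch (the numbers are illustrative, not data from the study):

```python
import math

def instantaneous_capture_point(x, v, z, g=9.81):
    """ICP under the linear inverted pendulum model: the ground point over
    which the CoM (height z, position x, velocity v) would come to rest."""
    omega = math.sqrt(g / z)  # natural frequency of the pendulum
    return x + v / omega

# CoM 1 m high, 0.2 m ahead of the stance foot, moving forward at 0.5 m/s:
icp = instantaneous_capture_point(x=0.2, v=0.5, z=1.0)
print(round(icp, 3))  # → 0.36
```

    A step is capturable when the new base of support contains this point, which is exactly the condition the study checked at toe-off.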

  10. Humanoids for lunar and planetary surface operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing

    2005-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the coming decades and the development spirals of Project Constellation. These milestones correspond to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.

  11. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step toward further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
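    The abstract names a Kalman filter for the back-mounted accelerometer but does not give its form. A minimal scalar (constant-state) Kalman filter of the kind typically used to smooth a noisy tilt reading looks like the sketch below; the process and measurement noise parameters q and r are illustrative assumptions, not values from the paper.

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    """Minimal scalar Kalman filter: a constant-state model with process
    noise q and measurement noise r. Returns the filtered estimates."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                    # predict: state assumed constant, inflate covariance
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the innovation
        p *= (1 - k)
        out.append(x)
    return out

# Noisy tilt readings (rad) scattered around a true value of 0.5:
zs = [0.52, 0.47, 0.55, 0.49, 0.51, 0.46, 0.53, 0.50]
est = kalman_1d(zs)
```

    The gain k shrinks as the covariance settles, so later measurements perturb the estimate less: the filtered output converges near the true tilt while individual readings keep fluctuating.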

  12. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step toward further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  13. Acquiring neural signals for developing a perception and cognition model

    NASA Astrophysics Data System (ADS)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to the automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides a theoretical basis for using a bottom-up approach to automate the management of large amounts of information in ways directly useful to human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the scalp surface. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located in the hips, knees, and ankles for walking, 6 DOFs in the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated with the robot-walking behaviors.

  14. Research state-of-the-art of mobile robots in China

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing

    1991-03-01

    Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type mobile robot, and an autonomous mobile vehicle. Some relevant technologies are also described.

  15. Humanoid robotics in health care: An exploration of children's and parents' emotional reactions.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2015-07-01

    A new non-pharmacological method of distraction was tested with 57 children during their annual flu vaccination. Given children's growing enthusiasm for technological devices, a humanoid robot was programmed to interact with them while a nurse administered the vaccination. Children smiled more often with the robot, as compared to the control condition, but they did not cry less. Parents indicated that their children held stronger memories for the robot than for the needle, wanted the robot in the future, and felt empowered to cope. We conclude that children and their parents respond positively to a humanoid robot at the bedside. © The Author(s) 2013.

  16. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    DTIC Science & Technology

    2006-01-01

    Brooks, R., Bryson, J., Marjanovic, M., Stein, L. A., & Wessler, M. (1996), Humanoid Software, Technical report, MIT Artificial Intelligence Lab… Journal of Biomechanics 19, 231–238. Marjanovic, M. (1995), Learning Functional Maps Between Sensorimotor Systems on a Humanoid Robot, Master's thesis, MIT

  17. The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study

    ERIC Educational Resources Information Center

    Bacivarov, Ioan C.; Ilian, Virgil L. M.

    2012-01-01

    This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Although a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…

  18. Incorporation of perception-based information in robot learning using fuzzy reinforcement learning agents

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo

    2002-04-01

    Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilize perceptions to guide their learning toward those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. To this end, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local-minima problem in traditional actor-critic reinforcement learning. Moreover, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection, and sea rescue, as well as military maritime activity.

  19. Generate an Optimum Lightweight Legs Structure Design Based on Critical Posture in A-FLoW Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Luthfi, A.; Subhan, K. A.; Eko H, B.; Sanggar, D. R.; Pramadihanto, D.

    2018-04-01

    Lightweight construction and energy efficiency play an important role in humanoid robot development. The application of computer-aided engineering (CAE) in the development process is one way to achieve an appropriate weight reduction. This paper describes a method to generate an optimum lightweight leg structure design based on the critical posture during walking locomotion of the A-FLoW humanoid robot. The critical posture can be obtained from the highest forces and moments at each joint of the robot body during walking locomotion. From the finite element analysis (FEA) results, a leg structure design for the A-FLoW humanoid robot can be realized with a maximum displacement of 0.05 mm and a weight reduction of about 0.598 kg for the thigh structure, and a maximum displacement of 0.13 mm and a weight reduction of about 0.57 kg for the shin structure.

  20. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-11-24

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, shows promise in helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
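    The paper's classifier details are not reproduced here, but the essence of SSVEP detection, picking the flicker frequency whose sinusoidal components carry the most power in the recorded EEG, can be sketched as follows. This is a single-channel toy on synthetic data; a real pipeline would use multi-channel methods such as canonical correlation analysis.

```python
import math

def ssvep_classify(signal, fs, candidates):
    """Return the candidate stimulus frequency (Hz) whose sine/cosine
    correlation with the EEG segment carries the most power."""
    n = len(signal)
    best_f, best_p = None, -1.0
    for f in candidates:
        c = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
        s = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
        p = (c * c + s * s) / n   # power at frequency f
        if p > best_p:
            best_f, best_p = f, p
    return best_f

# Synthetic 1 s segment at fs = 256 Hz: strong 12 Hz flicker response
# plus a weaker 8 Hz component.
fs = 256
sig = [math.sin(2 * math.pi * 12 * i / fs) + 0.3 * math.sin(2 * math.pi * 8 * i / fs)
       for i in range(fs)]
print(ssvep_classify(sig, fs, [8, 10, 12, 15]))  # → 12
```

    Each detected frequency is then mapped to a robot command (e.g. turn left, turn right, walk) in the closed-loop experiments the abstract describes.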

  1. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots

    PubMed Central

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-01-01

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, shows promise in helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled. PMID:26650051

  2. A new biarticular actuator design facilitates control of leg function in BioBiped3.

    PubMed

    Sharbafi, Maziar Ahmad; Rode, Christian; Kurowski, Stefan; Scholz, Dorian; Möckel, Rico; Radkhah, Katayon; Zhao, Guoping; Rashty, Aida Mohammadinejad; Stryk, Oskar von; Seyfarth, Andre

    2016-07-01

    Bioinspired legged locomotion comprises different aspects, such as (i) benefiting from reduced-complexity control approaches as observed in humans and animals, (ii) combining embodiment with the controllers and (iii) reflecting neural control mechanisms. One of the most important lessons learned from nature is the significant role of compliance in simplifying control, enhancing energy efficiency and improving robustness against perturbations in legged locomotion. In this research, we investigate how body morphology in combination with actuator design may facilitate motor control of leg function. Inspired by the human leg muscular system, we show that biarticular muscles have a key role in balancing the upper body, joint coordination and swing-leg control. Appropriate adjustment of biarticular spring rest length and stiffness can simplify the control and also reduce energy consumption. In order to test these findings, the BioBiped3 robot was developed as a new version of the BioBiped series of biologically inspired, compliant musculoskeletal robots. In this robot, three-segmented legs actuated by mono- and biarticular series elastic actuators mimic the nine major human leg muscle groups. With the new biarticular actuators in BioBiped3, novel simplified control concepts for postural balance and for joint coordination in rebounding movements (drop jumps) were demonstrated and validated.

  3. Balancing Theory and Practical Work in a Humanoid Robotics Course

    ERIC Educational Resources Information Center

    Wolff, Krister; Wahde, Mattias

    2010-01-01

    In this paper, we summarize our experiences from teaching a course in humanoid robotics at Chalmers University of Technology in Goteborg, Sweden. We describe the robotic platform used in the course and we propose the use of a custom-built robot consisting of standard electronic and mechanical components. In our experience, by using standard…

  4. Social cognitive neuroscience and humanoid robotics.

    PubMed

    Chaminade, Thierry; Cheng, Gordon

    2009-01-01

    We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We first review important aspects of this framework. In a second part, we discuss how this framework is used to address questions pertaining to artificial agents' social competence. We focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

  5. Dynamic legged locomotion in robots and animals

    NASA Astrophysics Data System (ADS)

    Raibert, Marc; Playter, Robert; Ringrose, Robert; Bailey, Dave; Leeser, Karl

    1995-01-01

    This report documents our study of active legged systems that balance actively and move dynamically. The purpose of this research is to build a foundation of knowledge that can lead both to the construction of useful legged vehicles and to a better understanding of how animal locomotion works. In this report we provide an update on progress during the past year. Here are the topics covered in this report: (1) Is cockroach locomotion dynamic? To address this question we created three models of cockroaches, each abstracted at a different level. We provided each model with a control system and computer simulation. One set of results suggests that 'Groucho Running,' a type of dynamic walking, seems feasible at cockroach scale. (2) How do bipeds shift weight between the legs? We built a simple planar biped robot specifically to explore this question. It shifts its weight from one curved foot to the other, using a toe-off and toe-on strategy, in conjunction with dynamic tipping. (3) 3D biped gymnastics: The 3D biped robot has done front somersaults in the laboratory. The robot changes its leg length in flight to control rotation rate. This in turn provides a mechanism for controlling the landing attitude of the robot once airborne. (4) Passively stabilized layout somersault: We have found that the passive structure of a gymnast, the configuration of masses and compliances, can stabilize inherently unstable maneuvers. This means that body biomechanics could play a larger role in controlling behavior than is generally thought. We used a physical 'doll' model and computer simulation to illustrate the point. (5) Twisting: Some gymnastic maneuvers require twisting. We are studying how to couple the biomechanics of the system to its control to produce efficient, stable twisting maneuvers.
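    Point (3) above, changing leg length in flight to control rotation rate, follows directly from conservation of angular momentum L = Iω during the aerial phase: with no ground contact, reshaping the body trades moment of inertia against spin rate. A minimal sketch with illustrative numbers (not measurements from the 3D biped):

```python
def spin_rate_after_reshape(i_initial, omega_initial, i_new):
    """Angular momentum L = I * omega is conserved in flight, so a robot
    that extends its legs (larger I) spins slower, and one that tucks
    (smaller I) spins faster."""
    return i_initial * omega_initial / i_new

# A tucked robot spinning at 8 rad/s extends its legs, doubling its inertia:
print(spin_rate_after_reshape(i_initial=2.0, omega_initial=8.0, i_new=4.0))  # → 4.0
```

    This halved spin rate is the mechanism the report describes for controlling landing attitude: by choosing when and how far to extend, the robot selects how much total rotation accumulates before touchdown.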

  6. The second me: Seeing the real body during humanoid robot embodiment produces an illusion of bi-location.

    PubMed

    Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane

    2016-11-01

    Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot head movements and walk, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Humanoid assessing rehabilitative exercises.

    PubMed

    Simonov, M; Delconte, G

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "New Methodologies for Patients Rehabilitation". The article presents an approach in which a rehabilitative exercise prepared by a healthcare professional is encoded as formal knowledge and used by a humanoid robot to assist patients without involving other care actors. The main objective is the use of humanoids in rehabilitative care; an example is pulmonary rehabilitation in COPD patients. Another goal is an automated judgment functionality to determine how well the rehabilitation exercise matches the pre-programmed correct sequence. We use the Aldebaran Robotics NAO humanoid to set up an artificial cognitive application. The pre-programmed NAO guides the elderly patient through a humanoid-driven rehabilitation exercise, but needs to evaluate the human actions against the correct template. The patient is observed through NAO's cameras, and we use the Microsoft Kinect SDK to extract the motion path from the humanoid's recorded video. We compare human- and humanoid-operated process sequences by using Dynamic Time Warping (DTW) and test the prototype. This artificial cognitive software showcases the use of the DTW algorithm to enable humanoids to judge, in near real time, the correctness of rehabilitative exercises performed by patients following the robot's indications. Better, sustainable rehabilitative care services in remote residential settings could be enabled by combining intelligent applications piloting humanoids with the DTW pattern-matching algorithm applied at run time to compare humanoid- and human-operated process sequences. In turn, this will lower the need for human care.
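    Dynamic Time Warping, the matching algorithm named in the abstract, scores two motion sequences that differ in speed by minimizing the cumulative pointwise cost over all monotonic alignments. A minimal sketch on 1-D sequences (a real system such as the one described would compare multi-dimensional joint trajectories):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    the cumulative cost of the cheapest monotonic alignment."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the alignment by advancing either sequence or both
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# The same arm-raise profile performed slightly slower still matches exactly,
# because DTW stretches the time axis:
template = [0, 1, 2, 3, 2, 1, 0]
slower = [0, 0, 1, 2, 3, 3, 2, 1, 0]
print(dtw_distance(template, slower))  # → 0.0
```

    A low DTW distance between the patient's extracted motion path and the robot's template is what lets the system judge an exercise as correctly performed even when the patient moves at a different pace.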

  8. Can We Talk to Robots? Ten-Month-Old Infants Expected Interactive Humanoid Robots to Be Talked to by Persons

    ERIC Educational Resources Information Center

    Arita, A.; Hiraki, K.; Kanda, T.; Ishiguro, H.

    2005-01-01

    As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the…

  9. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    PubMed

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor-synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties in exploiting facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.

  10. Artificial heart for humanoid robot

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Wu, Lianjun; Tadesse, Yonas

    2014-03-01

    A soft robotic device inspired by the pumping action of a biological heart is presented in this study. Developing an artificial heart for a humanoid robot enables us to make a better biomedical device for ultimate use in humans. As technology continues to advance, the prospect of implementing high-performance, biomimetic artificial organs draws nearer each day. In this paper, we present the design and development of a soft artificial heart that can be used in a humanoid robot and simulates the functions of a human heart using shape-memory-alloy technology. The robotic heart is designed to pump a blood-like fluid to parts of the robot, such as the face, to simulate blushing or anger, through the use of elastomeric substrates and dedicated features for the transport of fluids.

  11. The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink

    NASA Astrophysics Data System (ADS)

    Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli

    A simulation method for an adaptive controller is proposed for a humanoid robot system based on co-simulation with Solidworks, ADAMS and Simulink. This method avoids a complex mathematical modeling process and fully exploits the real-time dynamic simulation capability of Simulink; it can also be generalized to other complicated control systems. The method is adopted here to build and analyse a model of the humanoid robot, and the trajectory-tracking and adaptive controller designs proceed from it. The trajectory-tracking performance is evaluated by least-squares curve fitting. Comparative analysis shows that the disturbance-rejection capability of the robot is substantially improved.
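    The least-squares evaluation mentioned in the abstract can be illustrated with a closed-form line fit of tracked versus commanded values: a slope near 1 and an intercept near 0 indicate accurate tracking. The data below are illustrative, not results from the paper.

```python
def least_squares_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (closed form, one regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)          # variance of x (unnormalized)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    a = sxy / sxx
    return a, my - a * mx

# Fit the tracked joint angle against the commanded one:
cmd = [0.0, 0.5, 1.0, 1.5, 2.0]
actual = [0.02, 0.51, 1.01, 1.49, 2.02]
a, b = least_squares_line(cmd, actual)
```

    Here the fitted slope is close to 1 and the intercept close to 0, the signature of a controller that tracks its commanded trajectory with only small, unbiased error.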

  12. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots via BCI as intuitively as we control our own bodies is a challenge for current research in robotics and neuroscience. To interact successfully with the environment, the brain integrates multiple sensory cues into a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain relative to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet little is known about whether audio-visual integration can improve the control of a surrogate. To explore this issue, we provided human footstep sounds as auditory feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through SSVEP-based BCI. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual gait reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the BCI user's motor decisions and support the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by providing multisensory feedback to the BCI user. PMID:24987350

  13. Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism

    ERIC Educational Resources Information Center

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot…

  14. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of "dull, dirty or dangerous" tasks, allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center's newest development areas, can provide crew with low-mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force-measuring shoes, and grasp-assist gloves.

  16. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the feasibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further simplify these complex tasks. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and determines whether each encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. On several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. An important implication for future work is that hybridizing simple BCI protocols provides extended controllability for carrying out complicated tasks even with a low-cost system. PMID:24023953
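As an illustration of the SSVEP half of such a pipeline, a minimal frequency-tagging detector can score candidate stimulus frequencies by spectral power and pick the strongest. This is a generic sketch under simplifying assumptions (single channel, plain FFT power), not the authors' classifier; `detect_ssvep` is a hypothetical name.

```python
import numpy as np

def detect_ssvep(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency with the largest
    spectral power in a single-channel EEG segment."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic check: a noisy 10 Hz oscillation should be labeled 10 Hz.
fs = 256
t = np.arange(fs * 2) / fs                     # 2 s of data
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep(sig, fs, [6.0, 10.0, 15.0]))  # 10.0
```

Real systems typically use multi-channel methods such as canonical correlation analysis, but the decision structure is the same: score each tagged frequency, choose the maximum.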

  17. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the feasibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further simplify these complex tasks. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and determines whether each encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. On several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. An important implication for future work is that hybridizing simple BCI protocols provides extended controllability for carrying out complicated tasks even with a low-cost system.

  18. Pre-Schoolers' Interest and Caring Behaviour around a Humanoid Robot

    ERIC Educational Resources Information Center

    Ioannou, Andri; Andreou, Emily; Christofi, Maria

    2015-01-01

    This exploratory case study involved a humanoid robot, NAO, and four pre-schoolers. NAO was placed in an indoor playground together with other toys and appeared as a peer who played, talked, danced and told stories. Analysis of video recordings focused on children's behaviour around NAO and how the robot gained children's attention and…

  19. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task

    PubMed Central

    Prakash, Akanksha; Rogers, Wendy A.

    2015-01-01

    Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936

  20. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task.

    PubMed

    Prakash, Akanksha; Rogers, Wendy A

    2015-04-01

    Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.

  1. LARM PKM solutions for torso design in humanoid robots

    NASA Astrophysics Data System (ADS)

    Ceccarelli, Marco

    2014-12-01

    Human-like torso features are essential in humanoid robots. In this paper, problems in the design and operation of robotic torso solutions are discussed with reference to experiences and designs developed at the Laboratory of Robotics and Mechatronics (LARM) in Cassino, Italy. A new solution is presented in conceptual views as a waist-trunk structure that properly partitions performance between walking and arm operations sustained by the torso.

  2. Robot-Mediated Interviews - How Effective Is a Humanoid Robot as a Tool for Interviewing Young Children?

    PubMed Central

    Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre

    2013-01-01

    Robots have been used in a variety of education, therapy and entertainment contexts. This paper introduces the novel application of humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ from their interactions with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures included behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and with the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an 'interviewer' for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625

  3. "Robovie, You'll Have to Go into the Closet Now": Children's Social and Moral Relationships with a Humanoid Robot

    ERIC Educational Resources Information Center

    Kahn, Peter H., Jr.; Kanda, Takayuki; Ishiguro, Hiroshi; Freier, Nathan G.; Severson, Rachel L.; Gill, Brian T.; Ruckert, Jolina H.; Shen, Solace

    2012-01-01

    Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter…

  4. Natural Tasking of Robots Based on Human Interaction Cues

    DTIC Science & Technology

    2005-06-01


  5. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. The work focuses on modeling the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach for the design of linguistic capabilities in cognitive robotic agents. In particular, three models are discussed, in which the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of using a humanoid robotic platform, and specifically the open-source iCub platform, for the study of embodied cognition.

  6. From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot

    DTIC Science & Technology

    2003-06-01


  7. Autonomous learning in humanoid robotics through mental imagery.

    PubMed

    Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo

    2013-05-01

    In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that the controller itself can use to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    NASA Astrophysics Data System (ADS)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  9. Control of humanoid robot via motion-onset visual evoked potentials

    PubMed Central

    Li, Wei; Li, Mengfan; Zhao, Jing

    2015-01-01

    This paper investigates controlling humanoid robot behavior via motion-onset-specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar across robot images that intuitively represent the robot behaviors to be controlled with the mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain high accuracy. The study documents an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918
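Event-related potentials such as the N200 are typically extracted by averaging time-locked EEG epochs, so that background activity uncorrelated across trials cancels. The sketch below illustrates only this generic averaging step, not the authors' pipeline; the epoch counts and deflection shape are synthetic.

```python
import numpy as np

def average_erp(epochs):
    """Average time-locked EEG epochs (trials x samples). Noise that
    is uncorrelated across trials shrinks roughly as 1/sqrt(N),
    exposing the event-related deflection."""
    return np.asarray(epochs).mean(axis=0)

# Synthetic check: a negative deflection buried in trial-level noise.
rng = np.random.default_rng(1)
true_erp = np.concatenate([np.zeros(5), -1.0 * np.ones(5)])
trials = true_erp + rng.standard_normal((200, 10))  # SNR < 1 per trial
estimate = average_erp(trials)
```

With 200 trials the residual noise per sample drops to roughly 1/14 of the single-trial level, which is why a component invisible in raw EEG becomes reliably detectable.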

  10. Drift-Free Humanoid State Estimation fusing Kinematic, Inertial and LIDAR Sensing

    DTIC Science & Technology

    2014-08-01

    This work addresses drift-free state estimation for a humanoid, fusing kinematic, inertial and LIDAR sensing to support registration to a map and to other objects in the robot's vicinity, while also contributing to direct low-level control of a Boston Dynamics Atlas robot. Dynamic locomotion of legged robotic systems remains an open and challenging research problem whose solution will enable humanoids to perform tasks and reach places inaccessible to wheeled or tracked robots. Several research institutions are developing walking and running…
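A common pattern for fusing a drifting high-rate estimate with a slower, drift-free fix is a complementary filter. The scalar sketch below is purely illustrative of that pattern and is not the estimator described in this report; the gain and step values are assumptions.

```python
def complementary_filter(prev, increment, absolute, alpha=0.98):
    """One fusion step: integrate the high-rate incremental estimate
    (e.g. IMU/kinematic odometry), then pull gently toward the
    drift-free absolute measurement (e.g. a LIDAR map fix)."""
    return alpha * (prev + increment) + (1.0 - alpha) * absolute

# A biased 0.01 m/step odometry drift stays bounded instead of
# growing linearly, because the absolute fix (here 0.0) leaks in.
x = 0.0
for _ in range(2000):
    x = complementary_filter(x, 0.01, 0.0)
print(x)  # settles near 0.49 m
```

The steady state follows from the fixed point x = alpha * (x + 0.01), i.e. x = alpha * 0.01 / (1 - alpha); full estimators replace this scalar blend with a Kalman-style fusion over the whole state.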

  11. Postural stability of biped robots and the foot-rotation indicator (FRI) point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, A.

    1999-06-01

    The focus of this paper is the problem of foot rotation in biped robots during the single-support phase. Foot rotation is an indication of postural instability, which should be carefully treated in a dynamically stable walk and avoided altogether in a statically stable walk. The author introduces the foot-rotation indicator (FRI) point, which is a point on the foot/ground-contact surface where the net ground-reaction force would have to act to keep the foot stationary. To ensure no foot rotation, the FRI point must remain within the convex hull of the foot-support area. In contrast with the ground projection of the center of mass (GCoM), which is a static criterion, the FRI point incorporates robot dynamics. As opposed to the center of pressure (CoP) -- better known as the zero-moment point (ZMP) in the robotics literature -- which may not leave the support area, the FRI point may leave the area. In fact, the position of the FRI point outside the footprint indicates the direction of the impending rotation and the magnitude of the rotational moment acting on the foot. Owing to these important properties, the FRI point helps not only to monitor the state of postural stability of a biped robot during the entire gait cycle, but also indicates the severity of instability of the gait. In response to a recent need, the paper also resolves the misconceptions surrounding the CoP/ZMP equivalence.
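The flavor of such a dynamic indicator point can be sketched in the sagittal plane with linear-inverted-pendulum dynamics. This illustrates only the containment test the record describes; it is not Goswami's full FRI computation, which uses the complete dynamic model of the robot, and all numbers below are hypothetical.

```python
def indicator_x(x_com, xdd_com, z_com, g=9.81):
    """Planar dynamic indicator point from the CoM state,
    x = x_com - (z_com / g) * xdd_com. Unlike the CoP, the computed
    point is allowed to fall outside the foot."""
    return x_com - (z_com / g) * xdd_com

def rotation_impending(x_ind, x_heel, x_toe):
    """The foot is about to rotate if the indicator point leaves
    the support interval [x_heel, x_toe]."""
    return x_ind < x_heel or x_ind > x_toe

# Hard forward acceleration of the CoM pushes the point behind the
# heel, flagging impending rotation about the heel edge.
x = indicator_x(x_com=0.05, xdd_com=3.0, z_com=0.9)
print(rotation_impending(x, x_heel=-0.05, x_toe=0.20))  # True
```

How far the point lies outside the interval then serves as a severity measure, mirroring the FRI property that its distance from the footprint scales with the unbalanced moment.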

  12. Supervising Remote Humanoids Across Intermediate Time Delay

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Rabe, Kenneth; Allan, Mark

    2006-01-01

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions which the remote humanoid autonomously executes. Initial results are presented.

  13. Toward humanoid robots for operations in complex urban environments

    NASA Astrophysics Data System (ADS)

    Pratt, Jerry E.; Neuhaus, Peter; Johnson, Matthew; Carff, John; Krupp, Ben

    2010-04-01

    Many infantry operations in urban environments, such as building clearing, are extremely dangerous and difficult and often result in high casualty rates. Despite the fast pace of technological progress in many other areas, the tactics and technology deployed for many of these dangerous urban operation have not changed much in the last 50 years. While robots have been extremely useful for improvised explosive device (IED) detonation, under-vehicle inspection, surveillance, and cave exploration, there is still no fieldable robot that can operate effectively in cluttered streets and inside buildings. Developing a fieldable robot that can maneuver in complex urban environments is challenging due to narrow corridors, stairs, rubble, doors and cluttered doorways, and other obstacles. Typical wheeled and tracked robots have trouble getting through most of these obstacles. A bipedal humanoid is ideally shaped for many of these obstacles because its legs are long and skinny. Therefore it has the potential to step over large barriers, gaps, rocks, and steps, yet squeeze through narrow passageways, and through narrow doorways. By being able to walk with one foot directly in front of the other, humanoids also have the potential to walk over narrow "balance beam" style objects and can cross a narrow row of stepping stones. We describe some recent advances in humanoid robots, particularly recovery from disturbances, such as pushes and walking over rough terrain. Our disturbance recovery algorithms are based on the concept of Capture Points. An N-Step Capture Point is a point on the ground in which a legged robot can step to in order to stop in N steps. The N-Step Capture Region is the set of all N-Step Capture Points. In order to walk without falling, a legged robot must step somewhere in the intersection between an N-Step Capture Region and the available footholds on the ground. We present results of push recovery using Capture Points on our humanoid robot M2V2.
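The instantaneous (0-step) capture point of the linear inverted pendulum has a well-known closed form; the sketch below shows only that formula, while the N-step Capture Regions described in this record involve the full stepping dynamics of the robot. The numeric example is hypothetical.

```python
import math

def instantaneous_capture_point(x_com, xdot_com, z0, g=9.81):
    """0-step capture point of the linear inverted pendulum:
    x_ic = x_com + xdot_com / omega, with omega = sqrt(g / z0).
    Stepping onto this point brings the CoM to rest over the foot."""
    omega = math.sqrt(g / z0)
    return x_com + xdot_com / omega

# A push giving the CoM 0.5 m/s forward velocity (CoM height 1.0 m)
# moves the capture point about 0.16 m ahead of the CoM.
print(instantaneous_capture_point(0.0, 0.5, 1.0))
```

To avoid a fall, the robot must place a foot where a capture region intersects the available footholds, which is exactly the intersection test the abstract describes.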

  14. Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    PubMed Central

    Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio

    2015-01-01

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs for expressing innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and more intuitively perceivable by humans as not different from them. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether there could be differences induced by interpreting robot versus human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199

  15. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model.

    PubMed

    Dadashzadeh, Behnam; Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the necessary leg actuator force to cause the robot center of mass to undergo arbitrary trajectories in stance with any arbitrary attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics. Then these calculated forces become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of degree 2, 4 and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles that result in lower maximum leg force than single-hump profiles.
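The inverse-dynamics step of such a planner, recovering the net force the leg must transmit so a point-mass CoM follows a prescribed stance trajectory, can be sketched as below. The mass, timing, and parabolic height profile are hypothetical, and the attack-angle and leg-direction details of the paper's model are omitted.

```python
import numpy as np

def required_force(traj, dt, m, g=(0.0, -9.81)):
    """Net force on a point-mass CoM following traj (N x 2, meters):
    F = m * (r_ddot - g), with r_ddot from twice numerically
    differentiating the prescribed trajectory."""
    acc = np.gradient(np.gradient(traj, dt, axis=0), dt, axis=0)
    return m * (acc - np.asarray(g))

# Hypothetical 0.3 s stance: a quadratic compress-and-extend in height.
m, dt = 30.0, 0.001
t = np.arange(0.0, 0.3, dt)
z = 0.8 - 0.5 * t * (0.3 - t)        # degree-2 CoM height profile
traj = np.stack([np.zeros_like(t), z], axis=1)
F = required_force(traj, dt, m)
print(F[:, 1].max())  # peak vertical force in newtons
```

Sweeping the trajectory family (arc, degree-2/4/6 polynomials) and recording the peak force and the integral of force times velocity is how quantities like maximum leg force and cost of transport can be compared.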

  16. Arbitrary Symmetric Running Gait Generation for an Underactuated Biped Model

    PubMed Central

    Esmaeili, Mohammad; Macnab, Chris

    2017-01-01

    This paper investigates generating symmetric trajectories for an underactuated biped during the stance phase of running. We use a point-mass biped (PMB) model for gait analysis that consists of a prismatic force actuator on a massless leg. The significance of this model is its ability to generate more general and versatile running gaits than the spring-loaded inverted pendulum (SLIP) model, making it more suitable as a template for real robots. The algorithm plans the leg actuator force needed to make the robot's center of mass undergo arbitrary trajectories in stance with any attack angle and velocity angle. The necessary actuator forces follow from the inverse kinematics and dynamics; these calculated forces then become the control input to the dynamic model. We compare various center-of-mass trajectories, including a circular arc and polynomials of degree 2, 4, and 6. The cost of transport and maximum leg force are calculated for various attack angles and velocity angles. The results show that choosing the velocity angle as small as possible is beneficial, but the angle of attack has an optimum value. We also find a new result: there exist biped running gaits with double-hump ground reaction force profiles that result in a lower maximum leg force than single-hump profiles. PMID:28118401

  17. An Integrated Framework for Human-Robot Collaborative Manipulation.

    PubMed

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I, learning to grasp the table, and 2) phase II, learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor; this confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.

  18. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with a disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  19. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work focuses on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models are discussed, in which the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of using a humanoid robotic platform, and specifically the open-source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  1. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  2. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  3. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  4. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  5. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  6. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  7. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José

    There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria such as energy minimization, acceleration, and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The present paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.
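
    Stretched Simulated Annealing extends the standard simulated annealing loop with a function-stretching transform to escape local minima already found. The generic loop it builds on can be sketched as below; this is illustrative only (the stretching transform and the actual gait cost function of the paper are not reproduced, and a simple quadratic stands in for the cost).

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.95,
                            iters=500, seed=0):
        """Generic simulated annealing minimizer over a list of parameters."""
        rng = random.Random(seed)
        x, fx = list(x0), cost(x0)
        best_x, best_f = x[:], fx
        t = t0
        for _ in range(iters):
            # propose a Gaussian perturbation of every parameter
            cand = [xi + rng.gauss(0.0, step) for xi in x]
            fc = cost(cand)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x[:], fx
            t *= cooling  # geometric cooling schedule
        return best_x, best_f
    ```

    In a gait-planning setting, the parameter vector would encode the unknown gait parameters and the cost would be evaluated by running the simulation model (e.g. measuring energy consumption per step).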

  8. Social humanoid robot SARA: development of the wrist mechanism

    NASA Astrophysics Data System (ADS)

    Penčić, M.; Rackov, M.; Čavić, M.; Kiss, I.; Cioată, V. G.

    2018-01-01

    This paper presents the development of a wrist mechanism for humanoid robots. The research was conducted within the project which develops the social humanoid robot Sara, a mobile anthropomorphic platform for researching the social behaviour of robots. There are two basic ways to realize a humanoid wrist. The first is based on biologically inspired structures that have variable stiffness, and the second on low-backlash mechanisms that have high stiffness. Our solution is a low-backlash differential mechanism that requires small actuators. Based on the kinematic-dynamic requirements, a dynamic model of the robot wrist is formed. A dynamic simulation for several hand positions was performed and the driving torques of the wrist mechanism were determined. The realized wrist has 2 DOFs and enables movements in the direction of flexion/extension 115°, ulnar/radial deviation ±45°, and combinations of these two movements. It consists of a differential mechanism with three spur bevel gears, two of which are driving and identical, while the last one is the driven gear to which the robot hand is attached. Power transmission and motion from the actuator to the input links of the differential mechanism is realized with two parallel placed identical gear mechanisms. The wrist mechanism has high carrying capacity and reliability, high efficiency, a compact design and low backlash that provides high positioning accuracy and repeatability of movements, which is essential for motion control.
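
    A bevel-gear differential of this kind maps two motor rotations onto the wrist's two DOFs. As a hedged sketch of the kinematics of an ideal differential with two identical driving gears (a unit gear ratio, the sign conventions, and the function interface are assumptions for illustration, not taken from the paper):

    ```python
    def wrist_angles_from_motors(theta_left, theta_right):
        """Ideal bevel-gear differential kinematics (illustrative).

        When both motors turn the same way the hand pitches
        (flexion/extension); when they turn in opposite directions
        the hand rolls (ulnar/radial deviation).
        """
        flexion = 0.5 * (theta_left + theta_right)
        deviation = 0.5 * (theta_left - theta_right)
        return flexion, deviation
    ```

    The practical advantage of this arrangement is that both motors contribute torque to either motion, which is one reason a differential wrist can get by with small actuators.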

  9. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

    Programming of robots in today's world is often done in a procedural oriented fashion, where object oriented programming is not incorporated. In order to keep a robust architecture allowing for easy expansion of capabilities and a truly modular design, object oriented programming is required. However, concepts in object oriented programming are not typically applied to a real time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework abstracting control of the robot into simple logical commands in a real time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  10. Robotic system construction with mechatronic components inverted pendulum: humanoid robot

    NASA Astrophysics Data System (ADS)

    Sandru, Lucian Alexandru; Crainic, Marius Florin; Savu, Diana; Moldovan, Cristian; Dolga, Valer; Preitl, Stefan

    2017-03-01

    Mechatronics is a new methodology used to achieve an optimal design of an electromechanical product: a collection of practices, procedures, and rules used by those who work in a particular branch of knowledge or discipline. Education in mechatronics at the Polytechnic University Timisoara is organized on three levels: bachelor, master, and PhD studies. These activities also cover the design of mechatronic systems. In this context, the design, implementation, and experimental study of a family of mechatronic demonstrators occupies an important place. In this paper, a variant of a mechatronic demonstrator based on the combination of electrical and mechanical components is proposed. The demonstrator, named humanoid robot, is equivalent to an inverted pendulum. An analysis of the components for the associated functions of the humanoid robot is presented. This type of mechatronic system development, combining hardware and software, offers the opportunity to build optimal solutions.

  11. Adaptive neural control for dual-arm coordination of humanoid robot with unknown nonlinearities in output mechanism.

    PubMed

    Liu, Zhi; Chen, Ci; Zhang, Yun; Chen, C L P

    2015-03-01

    To achieve excellent dual-arm coordination of a humanoid robot, it is essential to deal with the nonlinearities existing in the system dynamics. The literature on humanoid robot control so far commonly assumes that the problem of output hysteresis can be ignored. However, in practical applications output hysteresis is widespread, and its presence limits the motion/force performance of the robotic system. In this paper, an adaptive neural control scheme, which takes the unknown output hysteresis and computational efficiency into account, is presented and investigated. In the controller design, the prior knowledge of system dynamics is assumed to be unknown. The motion error is guaranteed to converge to a small neighborhood of the origin by Lyapunov's stability theory. Simultaneously, the internal force is kept bounded and its error can be made arbitrarily small.

  12. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Methodology Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777

  13. Biped Robot Gait Planning Based on 3D Linear Inverted Pendulum Model

    NASA Astrophysics Data System (ADS)

    Yu, Guochen; Zhang, Jiapeng; Bo, Wu

    2018-01-01

    In order to optimize the biped robot's gait, the biped robot's walking motion is simplified to the 3D linear inverted pendulum model. The Center of Mass (CoM) locus is determined from the relationship between the CoM and the Zero Moment Point (ZMP) locus, with the ZMP locus planned in advance. Then, the forward gait and lateral gait are simplified as a connecting-rod structure, and the swing leg trajectory is generated using B-spline interpolation. The stability of the walking process is discussed in conjunction with the ZMP equation. Finally, the system simulation is carried out under the given conditions to verify the validity of the proposed planning method.
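
    The CoM-ZMP relationship in the 3D linear inverted pendulum has a well-known closed form. The sketch below (an illustrative implementation, not the authors' code; it assumes a constant CoM height z_c and shows one horizontal axis only) propagates the CoM analytically from the dynamics x'' = (g / z_c) * (x - p), which is equivalent to the ZMP equation p = x - (z_c / g) * x'':

    ```python
    import math

    def lipm_com(x0, v0, p, t, z_c=0.8, g=9.81):
        """Closed-form CoM motion of the linear inverted pendulum over a
        single support phase with constant ZMP p:

            x(t) = p + (x0 - p)*cosh(w t) + (v0 / w)*sinh(w t),
            w    = sqrt(g / z_c).

        Returns CoM position and velocity at time t.
        """
        w = math.sqrt(g / z_c)
        x = p + (x0 - p) * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
        v = w * (x0 - p) * math.sinh(w * t) + v0 * math.cosh(w * t)
        return x, v
    ```

    Gait planners typically chain these single-support solutions step by step, choosing the initial CoM state of each step so that the overall CoM locus is continuous while the ZMP stays inside the planned support polygon.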

  14. Robonaut 2 and You: Specifying and Executing Complex Operations

    NASA Technical Reports Server (NTRS)

    Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia

    2017-01-01

    Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.

  15. RoboJockey: Designing an Entertainment Experience with Robots.

    PubMed

    Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo

    2016-01-01

    The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.

  16. Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context

    DTIC Science & Technology

    2016-12-01

    Case Study Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it has been first deployed on a team of...features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to...on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment. I. INTRODUCTION

  17. Electroactive polymer and shape memory alloy actuators in biomimetics and humanoids

    NASA Astrophysics Data System (ADS)

    Tadesse, Yonas

    2013-04-01

    There is a strong need to replicate natural muscles with artificial materials, as the structure and function of natural muscle are optimal for articulation. In particular, the cylindrical shape of natural muscle fiber and its interconnected structure motivate critical investigation of artificial muscle geometry and implementation in the design phase of certain platforms. Biomimetic robots and Humanoid Robot heads with Facial Expressions (HRwFE) are some of the typical platforms that can be used to study the geometrical effects of artificial muscles. It has been shown that electroactive polymer and shape memory alloy artificial muscles and their composites are some of the candidate materials that may replicate natural muscles, and they have shown great promise for biomimetics and humanoid robots. The application of these materials to these systems reveals the challenges and associated technologies that need to be developed in parallel. This paper focuses on computer-aided design (CAD) models of conductive polymer and shape memory alloy actuators in various biomimetic systems and Humanoid Robots with Facial Expressions (HRwFE). The design of these systems is presented in a comparative manner, primarily focusing on three critical parameters: the stress, the strain, and the geometry of the artificial muscle.

  18. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a model classification accuracy above 90.0% as the most important requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
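
    Information transfer rates of this kind are conventionally computed with the Wolpaw formula; a sketch follows (not the authors' code). Note that plugging in the reported 4-class SSVEP figures (90.3% accuracy, 3.65 s per trial) yields roughly 23 bits/min, close to but not exactly the reported 24.7 bits/min, so the paper's exact timing or accuracy parameters presumably differ slightly.

    ```python
    import math

    def wolpaw_itr(n_targets, accuracy, trial_time_s):
        """Wolpaw information transfer rate in bits/min.

        Bits per trial: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
        where N is the number of targets and P the classification accuracy.
        ITR = B * 60 / T for a trial lasting T seconds.
        """
        p, n = accuracy, n_targets
        bits = math.log2(n)
        if 0.0 < p < 1.0:  # the entropy terms vanish at P = 1 and are undefined at P = 0
            bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
        return bits * 60.0 / trial_time_s
    ```

    A perfect binary selector at one trial per minute gives exactly 1 bit/min, which is a handy sanity check on any ITR implementation.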

  19. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots

    PubMed Central

    Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a model classification accuracy above 90.0% as the most important requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper. PMID:26562524

  20. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related...universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  1. How Do Young Children Deal with Hybrids of Living and Non-Living Things: The Case of Humanoid Robots

    ERIC Educational Resources Information Center

    Saylor, Megan M.; Somanader, Mark; Levin, Daniel T.; Kawamura, Kazuhiko

    2010-01-01

    In this experiment, we tested children's intuitions about entities that bridge the contrast between living and non-living things. Three- and four-year-olds were asked to attribute a range of properties associated with living things and machines to novel category-defying complex artifacts (humanoid robots), a familiar living thing (a girl), and a…

  2. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed Central

    Zheng, Zhi; Swanson, Amy R.; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A.; Weitlauf, Amy F.; Sarkar, Nilanjan

    2013-01-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms. PMID:24014194

  3. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed

    Warren, Zachary E; Zheng, Zhi; Swanson, Amy R; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A; Weitlauf, Amy F; Sarkar, Nilanjan

    2015-11-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms.

  4. Building Robota, a mini-humanoid robot for the rehabilitation of children with autism.

    PubMed

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot to assess children's imitation ability and to teach children simple coordinated behaviors. In this article, the authors review the recent technological developments that have made the Robota robots suitable for use with children with autism. They critically appraise the main outcomes of two sets of behavioral studies conducted with Robota and discuss how these results inform future development of the Robota robots and robots in general for the rehabilitation of children with complex developmental disabilities.

  5. Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.

    PubMed

    Leylavi Shoushtari, Ali

    2016-01-01

    Motion planning for humanoid robots is a critical issue due to their high redundancy and to theoretical and technical considerations such as stability, motion feasibility, and collision avoidance. The strategies the central nervous system employs to plan, signal, and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by these self-models are then employed to calculate the safe region within which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism, and stability of the predicted motion.
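    The paper's twelve parametric boundary functions are not given in the abstract; as a rough sketch of the self-modeling idea, one can approximate each body segment by a capsule (a line segment plus a radius) and test whether the manipulated object stays outside every capsule by a safety margin. All names and numbers below are illustrative assumptions, not the authors' implementation.

```python
import math

# Hedged sketch of body self-modeling for collision avoidance: each body
# segment is approximated by a capsule (line segment + radius), a stand-in
# for the paper's twelve parametric boundary functions.

def point_segment_distance(p, a, b):
    """Euclidean distance from 3-D point p to segment ab (tuples)."""
    abv = tuple(bi - ai for ai, bi in zip(a, b))
    apv = tuple(pi - ai for ai, pi in zip(a, p))
    ab2 = sum(c * c for c in abv)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(apv, abv)) / ab2))
    closest = tuple(ai + t * ci for ai, ci in zip(a, abv))
    return math.dist(p, closest)

def is_safe(obj_point, capsules, margin=0.05):
    """True if the object point lies outside every body capsule plus margin."""
    return all(point_segment_distance(obj_point, a, b) > r + margin
               for a, b, r in capsules)
```

    A planner would evaluate `is_safe` on sampled points of the manipulated object at each step of the candidate trajectory and penalize or discard violating motions.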

  6. Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David

    2003-01-01

    Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a NASA and Defense Advanced Research Projects Agency (DARPA) joint project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).

  7. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin.

    PubMed

    Roncone, Alessandro; Hoffmann, Matej; Pattacini, Ugo; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.

  8. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    PubMed

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help decide among the candidates in the future and improve the design of humanoid robots and robotic rehabilitation devices.
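    The decoupling at the heart of the Eigenmovement concept can be illustrated with a toy linear model. If the coupled sway dynamics are q̈ = A q, then in the coordinates w = V⁻¹q, where V is the eigenvector matrix of A, each component obeys its own independent scalar equation ẅᵢ = λᵢwᵢ. The matrix below is made up for illustration; it is not the identified hip-ankle dynamics from the paper.

```python
import numpy as np

# Toy illustration of Eigenmovement decoupling (matrix values are invented,
# not the robot dynamics identified in the paper).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # coupled two-segment dynamics matrix
lam, V = np.linalg.eig(A)         # eigenvalues and eigenvectors of A

# In eigen-coordinates w = V^{-1} q the dynamics matrix becomes diagonal,
# i.e. each eigenmovement is governed by a coupling-free scalar equation.
A_modal = np.linalg.inv(V) @ A @ V
print(np.allclose(A_modal, np.diag(lam)))  # True
```

    Controlling each eigenmovement with its own scalar feedback law is then equivalent to controlling the coupled system, which is what makes the EM controller more parsimonious than full state feedback.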

  9. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    PubMed Central

    Alexandrov, Alexei V.; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A.; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state-of-the-art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi-DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) that the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help decide among the candidates in the future and improve the design of humanoid robots and robotic rehabilitation devices. PMID:28487646

  10. Humanoid Robot Control System Balance Dance Indonesia and Reader Filters Using Complementary Angle Values

    NASA Astrophysics Data System (ADS)

    Sholihin; Susanti, Eka

    2018-02-01

    As technology advances, people are driven to keep up with and understand these developments. A robot is a tool that can assist people and offers several advantages. Basically, a humanoid robot is a robot that resembles a human being in its entire drive structure. In building this humanoid robot, the researchers use the MPU6050 module, an important component of the robot because it reports the tilt angle about the X and Y reference axes; the raw angle reading still contains noise if it is not filtered first. A complementary filter is used to reduce this noise, by tuning the filter coefficient and the sampling time, which govern how the angle signal is updated. The filtered angle value is fed to a PID controller, whose output is converted into servo pulses. Testing showed that the most stable angle reading in this experiment was obtained with the filter coefficient a = 0.96 and the sampling time dt = 10 ms.
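    The angle-estimation and control loop described above can be sketched as follows. Only the coefficient a = 0.96 and the sampling time dt = 10 ms come from the abstract; the function names, gains, and structure are illustrative assumptions.

```python
# Sketch of the complementary-filter + PID loop described in the abstract.
# Only a = 0.96 and dt = 0.01 s come from the paper; names and gains are
# illustrative assumptions.

def complementary_filter(angle_prev, gyro_rate, accel_angle, a=0.96, dt=0.01):
    """Fuse the integrated gyro rate (high-pass) with the accelerometer
    angle (low-pass) using filter coefficient a."""
    return a * (angle_prev + gyro_rate * dt) + (1.0 - a) * accel_angle

def pid_step(error, integral, prev_error, kp=1.0, ki=0.0, kd=0.0, dt=0.01):
    """One PID update; returns (control output, updated integral)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral
```

    With a = 0.96 the gyro dominates short-term updates while the accelerometer slowly corrects drift; for example, `complementary_filter(0.0, 10.0, 5.0)` returns 0.96·(10.0·0.01) + 0.04·5.0 = 0.296.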

  11. Humanoid Robots: A New Kind of Tool

    DTIC Science & Technology

    2000-01-01

    Breazeal (Ferrell), R. Irie, C. C. Kemp, M. J. Marjanovic, B. Scassellati, M. M. Williamson, Alternate Essences of Intelligence, AAAI 1998. 2 R. A. Brooks, C...Breazeal, M. J. Marjanovic, B. Scassellati, M. M. Williamson, The Cog Project: Building a Humanoid Robot, Computation for Metaphors, Analogy and...Functions, Vol. 608, 1990, New York Academy of Sciences, pp. 637-676. 7 M. J. Marjanovic, B. Scassellati, M. M. Williamson, Self-Taught Visually-Guided

  12. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with disjoint attention behavior, rated it as more human-like, and found it more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints for the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  13. Feasibility of using a humanoid robot to elicit communicational response in children with mild autism

    NASA Astrophysics Data System (ADS)

    Malik, Norjasween Abdul; Shamsuddin, Syamimi; Yussof, Hanafiah; Azfar Miskam, Mohd; Che Hamid, Aminullah

    2013-12-01

    Research evidence is accumulating regarding the potential use of robots for the rehabilitation of children with autism. The purpose of this paper is to elaborate on the results of communicational response in two children with autism during interaction with the humanoid robot NAO. Both subjects in this study had been diagnosed with mild autism. Following the outcome of our first pilot study, the aim of this experiment is to explore the application of the NAO robot to engage a child and further teach about emotions through a game-centered and song-based approach. The experimental procedure involved interaction between the humanoid robot NAO and each child through a series of four different modules. The observation items are based on ten items selected with reference to GARS-2 (Gilliam Autism Rating Scale, second edition) and on input from clinicians and therapists. The results clearly indicated that both children showed positive responses throughout the interaction. Negative responses such as feeling scared or shying away from the robot were not detected. Real-time two-way communication between the child and the robot has a significantly positive impact on the responses toward the robot. To conclude, it is feasible to include robot-based interaction, specifically to elicit communicational response, as part of the rehabilitation intervention of children with autism.

  14. Humanoids for Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Myers, John; Newton, Jason; Csaszar, Ambrus; Gan, Quan; Hidalgo, Tim; Moore, Jeff; Sandoval, Steven; Xu, Jiajing; Schon, Aaron

    2006-01-01

    Human-like shape makes humanoids well suited for being fostered/taught by humans, and for learning from humans, which we consider the best means to develop cognitive and perceptual/motor skills for truly intelligent, cognitive robots.

  15. Honda humanoid robots development.

    PubMed

    Hirose, Masato; Ogawa, Kenichi

    2007-01-15

    Honda has been doing research on robotics since 1986, with a focus on bipedal walking technology. The research started with straight, static walking of the first prototype two-legged robot. Now, continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot, ASIMO. ASIMO is Honda's most advanced robot so far in both its mechanism and its control system. ASIMO's configuration allows it to operate freely in the human living space. It could be of practical help to humans with its five-fingered hands as well as its walking function. The target of further development of ASIMO is a robot that improves life in human society. Much development work will continue, both mechanical and electronic, staying true to Honda's 'challenging spirit'.

  16. Pilot clinical application of an adaptive robotic system for young children with autism

    PubMed Central

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2013-01-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. PMID:24104517

  17. Sociable Machines: Expressive Social Exchange between Humans and Robots

    DTIC Science & Technology

    2000-05-01

    many occasions about theories on emotion. I’ve cornered Robert Irie again and again about auditory processing. I’ve bugged Matto Marjanovic throughout...development at the MIT Artificial Intelligence Lab (Brooks, Breazeal, Marjanovic, Scassellati & Williamson 1999). Cog is a general purpose humanoid...RA-2, 253-262. Brooks, R. A., Breazeal, C., Marjanovic, M., Scassellati, B. & Williamson, M. M. (1999), The Cog Project: Building a Humanoid Robot, in

  18. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. The system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human through a cluttered environment with the tool for future use.

  19. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    PubMed

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those underlying judgments of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate (1) substantially similar judgments of human and robot behavior, both in terms of ascriptions of intentionality/controllability/desirability and in terms of plausibility judgments of behavior explanations; (2) a high level of agreement in judgments of robot behavior, slightly lower than but still largely similar to the agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  20. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend to. PMID:24379753

  1. Kinematics and dynamics analysis of a quadruped walking robot with parallel leg mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Hongbo; Sang, Lingfeng; Hu, Xing; Zhang, Dianfan; Yu, Hongnian

    2013-09-01

    A walking robot for the elderly and the disabled is desired to have large payload capacity, high stiffness, stability, etc. However, existing walking robots cannot meet these requirements because of their weight-to-payload ratio and limited functionality. Enhancing the capacity and functions of walking robots is therefore an important research issue. According to the walking requirements, and combining modular and reconfigurable design ideas, a quadruped/biped reconfigurable walking robot with a parallel leg mechanism is proposed. The proposed robot can be used as both a biped and a quadruped walking robot. The kinematics and performance of the 3-UPU parallel mechanism, the basic leg mechanism of the quadruped walking robot, are analyzed and the structural parameters are optimized. The results show that the performance of the walking robot is optimal when the circumradii R and r of the upper and lower platforms of the leg mechanism are 161.7 mm and 57.7 mm, respectively. Based on the optimization results, the kinematics and dynamics of the quadruped walking robot in static walking mode are derived using parallel-mechanism and influence-coefficient theory, and the optimal coordinated distribution of the dynamic load for the quadruped walking robot with over-determinate inputs is analyzed, which resolves the dynamic load coupling caused by the branches' constraints during walking. Besides laying a theoretical foundation for the development of a prototype, the kinematics and dynamics studies of the quadruped walking robot also advance the theoretical research of quadruped walking and the practical application of parallel mechanisms.

  2. Adaptive Language Games with Robots

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    2010-11-01

    This paper surveys recent research into language evolution using computer simulations and robotic experiments. The field has made tremendous progress in the past decade, going from simple simulations of lexicon formation with animal-like cybernetic robots to sophisticated grammatical experiments with humanoid robots.

  3. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    PubMed Central

    Roncone, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement. PMID:27711136

  4. Pilot clinical application of an adaptive robotic system for young children with autism.

    PubMed

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2014-07-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. © The Author(s) 2013.

  5. Comparison of kinematic and dynamic leg trajectory optimization techniques for biped robot locomotion

    NASA Astrophysics Data System (ADS)

    Khusainov, R.; Klimchik, A.; Magid, E.

    2017-01-01

    The paper presents a comparative analysis of two approaches to defining leg trajectories for biped locomotion. The first operates only with the kinematic limits of the leg joints and finds the maximum possible locomotion speed within those limits. The second defines leg trajectories from the dynamic stability point of view and utilizes the ZMP criterion. We show that the two methods give different trajectories, and demonstrate that trajectories based on pure dynamic optimization cannot be realized due to the joint limits, while kinematic optimization yields an unstable solution that can be balanced by upper-body movement.
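    The ZMP criterion mentioned above is commonly evaluated with the cart-table (linear inverted pendulum) approximation, x_zmp = x_com − (z_com/g)·ẍ_com, requiring the ZMP to stay inside the support polygon. The sketch below is a generic illustration of that test, not the authors' implementation; all names and numbers are assumptions.

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(x_com, z_com, x_com_acc):
    """Zero-Moment Point along x under the cart-table model:
    x_zmp = x_com - (z_com / G) * x_com_acc."""
    return x_com - (z_com / G) * x_com_acc

def is_dynamically_stable(x_zmp_val, foot_min, foot_max):
    """ZMP criterion: the ZMP must lie inside the support polygon
    (here reduced to its x-extent, foot_min..foot_max)."""
    return foot_min <= x_zmp_val <= foot_max
```

    A dynamic trajectory optimizer would impose `is_dynamically_stable` as a constraint at every sample of the gait, whereas a purely kinematic optimizer ignores it, which is why the two approaches yield different trajectories.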

  6. An asymptotic solution to a passive biped walker model

    NASA Astrophysics Data System (ADS)

    Yudaev, Sergey A.; Rachinskii, Dmitrii; Sobolev, Vladimir A.

    2017-02-01

    We consider a simple model of a passive dynamic biped walker with point feet and knee-less legs. The model is a switched system that includes an inverted double pendulum. The robot's gait and its stability depend on parameters such as the slope of the ramp, the length of the robot's legs, and the mass distribution along the legs. We present an asymptotic solution of the model. The first correction to the zero-order approximation is shown to agree with the numerical solution over a limited parameter range.

  7. Using a cognitive architecture for general purpose service robot control

    NASA Astrophysics Data System (ADS)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. Using those skills, the robot should complete a voice command expressed in natural language that encodes a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills to activate or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting the action the robot should complete to move toward the goal. Our proposal allows new goals to be given to the robot simply by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.

  8. Dissociated emergent-response system and fine-processing system in human neural network and a heuristic neural architecture for autonomous humanoid robots.

    PubMed

    Yan, Xiaodan

    2010-01-01

    The current study investigated the functional connectivity of the primary sensory system with resting-state fMRI and applied such knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. A dissociation was found within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala, whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. Such neural architecture inspired the design of dissociated emergent-response and fine-processing systems in autonomous humanoid robots, with separate processing units and a consolidation center to coordinate the two systems. Such a design can help autonomous robots detect and respond quickly to danger, so as to maintain their sustainability and independence.

  9. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain for analyzing human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted, first, time-domain simulations with added noise and, second, robot experiments implementing the IC model in a real-world robot (PostuRob II), to test the validity and stability of the model in the time domain and in real-world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the parameters estimated from the human experiments was implemented in Simulink for computer simulations with noise in the time domain and in the humanoid robot PostuRob II for the robot experiments. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, frequency response functions, and estimated parameters from the human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed balance behavior and estimated control parameters similar to the human experiments, in both the time and frequency domains. The IC model was also able to control the humanoid robot, keeping it upright, though with small differences from the human experiments in the time and frequency domains, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior in the time domain as well, both in computer simulations with added noise and in real-world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
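    A minimal, generic sketch of the kind of closed loop the IC model describes, an inverted-pendulum body stabilized by delayed proportional-derivative feedback, can be simulated directly in the time domain. All gains, the delay, and the plant parameters below are illustrative assumptions, not the parameters identified in the study.

```python
# Hedged sketch: inverted-pendulum stance stabilized by delayed PD feedback.
# Gains, delay, and body parameters are illustrative, not from the paper.

def simulate_stance(kp=1200.0, kd=250.0, delay_s=0.1, dt=0.001, t_end=5.0,
                    mass=70.0, com_height=1.0, theta0=0.02):
    """Euler-integrate body sway theta (rad) of a point-mass inverted
    pendulum under delayed PD ankle-torque feedback (small-angle gravity)."""
    g = 9.81
    inertia = mass * com_height ** 2           # point-mass pendulum inertia
    delay_steps = int(delay_s / dt)
    hist = [(theta0, 0.0)]                     # (theta, omega) history
    for _ in range(int(t_end / dt)):
        theta, omega = hist[-1]
        # the controller only sees the state delay_steps samples in the past
        th_d, om_d = hist[max(0, len(hist) - 1 - delay_steps)]
        torque = -kp * th_d - kd * om_d        # corrective ankle torque
        alpha = (mass * g * com_height * theta + torque) / inertia
        hist.append((theta + omega * dt, omega + alpha * dt))
    return [th for th, _ in hist]
```

    With these assumed gains the sway decays despite the 100 ms feedback delay; removing the delay speeds convergence further, which mirrors the role sensor and neural transport delays play in such balance models.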

  10. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    PubMed

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted, first, time-domain simulations with added noise and, second, robot experiments implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and in real-world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the parameters estimated from the human experiments was implemented in Simulink for computer simulations including noise in the time domain and for robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, frequency response functions, and estimated parameters from the human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed balance behavior and estimated control parameters similar to those of the human experiments, in both the time and frequency domains. The IC model was also able to control the humanoid robot, keeping it upright, but showed small differences from the human experiments in the time and frequency domains, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior in the time domain as well, both in computer simulations with added noise and in real-world situations with a humanoid robot. 
This provides further evidence that the IC model is a valid description of human balance control.
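The abstract describes taking a frequency-domain feedback model into the time domain with added noise. The sketch below is a minimal illustration of that idea, assuming a linearized single-link inverted pendulum stabilized by delayed proportional-derivative feedback with torque noise; the gains, delay, and anthropometric values are illustrative placeholders, not the parameters estimated in the paper.

```python
import numpy as np

def simulate_ic_balance(t_end=30.0, dt=0.001, seed=0):
    """Single-link inverted pendulum stabilized by delayed PD feedback.

    Illustrative stand-in for a time-domain IC-model simulation:
    all parameter values are assumptions, not the paper's estimates.
    """
    rng = np.random.default_rng(seed)
    m, h, g = 70.0, 1.0, 9.81         # body mass [kg], CoM height [m]
    J = m * h * h                     # moment of inertia about the ankle
    kp, kd, tau = 1200.0, 300.0, 0.1  # feedback gains, neural time delay [s]
    n = int(t_end / dt)
    d = int(tau / dt)                 # delay in samples
    theta = np.zeros(n)               # body sway angle [rad]
    omega = np.zeros(n)               # sway velocity [rad/s]
    theta[0] = 0.05                   # small initial lean
    for k in range(n - 1):
        th_del = theta[max(k - d, 0)]
        om_del = omega[max(k - d, 0)]
        # delayed corrective ankle torque plus additive motor noise
        torque = -(kp * th_del + kd * om_del) + rng.normal(0.0, 1.0)
        alpha = (m * g * h * theta[k] + torque) / J
        omega[k + 1] = omega[k] + alpha * dt
        theta[k + 1] = theta[k] + omega[k] * dt
    return theta
```

With these placeholder gains the proportional term exceeds the gravitational stiffness m*g*h, so the noisy closed loop keeps the pendulum upright, mirroring the qualitative result reported for PostuRob II.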

  11. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots

    PubMed Central

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people’s social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people’s judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior – (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people’s intentional stance toward the robot was in this case very similar to their stance toward the human. PMID:29184519

  12. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    PubMed

Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

Robots are expected to perform smart services and to undertake troublesome or difficult tasks in place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve such sequences so that they can perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to task performance with the humanoid robot Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even higher-level structures such as daily episodes. Moreover, it can robustly retrieve the correct episode from partial input cues. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot Mybot on three tasks: arranging toys, making cereal, and disposing of garbage.
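Deep ART stacks ART layers so that categories at one level become inputs to the next. As a toy stand-in for a single such layer, the sketch below implements textbook fuzzy ART with complement coding, a choice function, and a vigilance test; the parameter values and the inline vigilance check are simplifications, not the paper's architecture.

```python
import numpy as np

def fuzzy_art(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering (complement coding, vigilance test).

    A toy stand-in for one layer of a Deep ART hierarchy: each layer
    groups its inputs into categories, and stacking layers would yield
    events, then episodes, then daily episodes.
    """
    cats = []                     # category weight vectors
    labels = []
    for x in np.asarray(inputs, float):
        I = np.concatenate([x, 1.0 - x])            # complement coding
        best, best_T = None, -1.0
        for j, w in enumerate(cats):
            m = np.minimum(I, w)
            T = m.sum() / (alpha + w.sum())         # choice function
            if T > best_T and m.sum() / I.sum() >= rho:   # vigilance
                best, best_T = j, T
        if best is None:                            # no resonance: new category
            cats.append(I.copy())
            labels.append(len(cats) - 1)
        else:                                       # resonance: update weights
            cats[best] = beta * np.minimum(I, cats[best]) + (1 - beta) * cats[best]
            labels.append(best)
    return labels
```

Two nearby inputs resonate with the same category while distant ones spawn a new one, which is the mechanism that lets partial cues retrieve a stored pattern.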

  13. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
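The record describes automatically adapting camera exposure to avoid feature loss under threshold lighting. The sketch below is a hypothetical proportional exposure update driven by mean image brightness; the function, target value, and gain are assumptions for illustration, not the patented VPM algorithm.

```python
import numpy as np

def adapt_exposure(image, exposure_ms, target_mean=110.0,
                   lo=1.0, hi=50.0, gain=0.5):
    """Nudge exposure time toward a target mean brightness.

    Hypothetical control loop: a too-dark or saturated image loses
    feature data, so exposure is scaled toward the ratio of target
    to observed mean brightness, clamped to hardware limits.
    """
    mean = float(np.mean(image))
    ratio = target_mean / max(mean, 1.0)
    # blend toward the corrective ratio to avoid oscillation
    new_exposure = exposure_ms * (1.0 + gain * (ratio - 1.0))
    return float(np.clip(new_exposure, lo, hi))
```

Run repeatedly per frame, this lengthens exposure for underexposed scenes and shortens it when pixels approach saturation.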

  14. Evaluating alternative gait strategies using evolutionary robotics.

    PubMed

Sellers, William I; Dennis, Louise A; Wang, W.-J.; Crompton, Robin H

    2004-05-01

    Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customized to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated and this information used to evaluate possible locomotor strategies in early hominids.

  15. Evaluating alternative gait strategies using evolutionary robotics

    PubMed Central

Sellers, William I; Dennis, Louise A; Wang, W.-J.; Crompton, Robin H

    2004-01-01

    Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customized to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated and this information used to evaluate possible locomotor strategies in early hominids. PMID:15198699
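The cost-of-transport calculation mentioned in both records is a standard dimensionless measure: energy spent per unit weight per unit distance. A minimal sketch, using illustrative human-like numbers rather than the paper's predicted values for Australopithecus afarensis:

```python
def cost_of_transport(metabolic_power_w, speed_m_s, mass_kg, g=9.81):
    """Dimensionless cost of transport.

    COT = E / (m * g * d) = P / (m * g * v): metabolic power divided
    by body weight times speed. The example values below are generic
    human-walking figures, not results from the paper.
    """
    return metabolic_power_w / (mass_kg * g * speed_m_s)
```

For a 75 kg walker at 1.3 m/s expending roughly 287 W, this gives a COT near 0.3, in the range usually quoted for human walking.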

  16. A study of the passive gait of a compass-like biped robot: Symmetry and chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, A.; Espiau, B.; Thuilot, B.

    1998-12-01

The focus of this work is a systematic study of the passive gait of a compass-like planar biped robot on inclined slopes. The robot is kinematically equivalent to a double pendulum, possessing two kneeless legs with point masses and a third point mass at the hip joint. Three parameters, namely the ground-slope angle and the normalized mass and length of the robot, describe its gait. The authors show that in response to a continuous change in any one of its parameters, the symmetric and steady stable gait of the unpowered robot gradually evolves through a regime of bifurcations characterized by progressively complicated asymmetric gaits, eventually arriving at an apparently chaotic gait where no two steps are identical. The robot can maintain this gait indefinitely. A necessary (but not sufficient) condition for the stability of such gaits is the contraction of the phase-fluid volume. For this frictionless robot, the volume contraction, which the authors compute, is caused by the dissipative effects of the ground-impact model. In the chaotic regime, the fractal dimension of the robot's strange attractor (2.07) compared to its state-space dimension (4) also reveals strong contraction. The authors present a novel graphical technique based on the first return map that compactly captures the entire evolution of the gait, from symmetry to chaos. Additional passive dissipative elements in the robot joints result in a significant improvement in the stability and versatility of the gait, and provide a rich repertoire for simple control laws.

  17. A cortically-inspired model for inverse kinematics computation of a humanoid finger with mechanically coupled joints.

    PubMed

    Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A

    2016-05-19

    The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.

  18. Robust sensorimotor representation to physical interaction changes in humanoid motion learning.

    PubMed

    Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio

    2015-05-01

This paper proposes a learning-from-demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize knowledge of humanoid whole-body motions learned during teacher-supported interactions and to apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb gaps in timing and amplitude arising from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, robotic motions learned with the proposed feature were less dependent on the physical interactions than those learned with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes in the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.

  19. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, a cyclic-motion performance index is first exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. It can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position, while accounting for physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver.
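The core idea of a cyclic-motion QP can be shown at velocity level for a single kinematic chain. The sketch below minimizes a quadratic index whose linear term pulls the joints back toward their initial angles (the anti-drift mechanism), subject to the end-effector velocity constraint, and solves the equality-constrained QP via its KKT system. The function name and gain are assumptions; a plain linear solve stands in for the paper's recurrent-network solver, and the time-varying joint limits are omitted.

```python
import numpy as np

def cyclic_motion_step(J, v, q, q0, k=2.0):
    """One velocity-level step of a cyclic-motion QP (illustrative sketch).

    Minimize 0.5 * ||dq||^2 + k * (q - q0)^T dq  subject to  J dq = v.
    The attraction term k * (q - q0) drives the joints back toward
    their initial angles q0, remedying joint-angle drift over cycles.
    """
    m, n = J.shape
    c = k * (q - q0)
    # KKT system: [I  J^T; J  0] [dq; lam] = [-c; v]
    KKT = np.block([[np.eye(n), J.T],
                    [J, np.zeros((m, m))]])
    rhs = np.concatenate([-c, v])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                  # joint velocity command dq
```

When the joints are already at their initial angles the linear term vanishes and the solution reduces to the familiar minimum-norm (pseudoinverse) velocity.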

  20. Recent trends in humanoid robotics research: scientific background, applications, and implications.

    PubMed

    Solis, Jorge; Takanishi, Atsuo

    2010-11-01

Even though the market size is still small at this moment, the application fields of robots are gradually spreading from the manufacturing industry to other sectors, as robots become important components in supporting an aging society. Against this background, research on human-robot interaction (HRI) has become an emerging topic of interest for both basic research and customer applications. These studies focus especially on behavioral and cognitive aspects of the interaction and the social contexts surrounding it. As part of these studies, the term 'roboethics' has been introduced as an approach to discussing the potentialities and limits of robots in relation to human beings. In this article, we describe recent research trends in the field of humanoid robotics. Their principal applications and possible impact are discussed.

  1. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  2. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

When a humanoid robot moves in a dynamic environment, simply planning and following a path may not guarantee competent dynamic obstacle avoidance, because the robot acquires only limited information from the environment through a local vision sensor. Thus, it is essential to update its local map as frequently as possible by controlling its gaze while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria, based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, a fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, the partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
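The fuzzy integral aggregation mentioned above can be illustrated with a Choquet integral, which, unlike a weighted sum, scores criteria through a fuzzy measure over criterion subsets and so can model interaction between criteria. A minimal sketch, assuming toy criterion names and measure values rather than the paper's degrees of consideration:

```python
def choquet_integral(scores, measure):
    """Choquet fuzzy integral of criterion scores w.r.t. a fuzzy measure.

    `scores` maps criterion names to partial evaluation values in [0, 1];
    `measure` maps frozensets of criterion names to [0, 1] (monotone,
    with the full set measuring 1). Each candidate gaze direction would
    receive one such aggregated score.
    """
    # sort criteria by score, descending
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    total, prev = 0.0, 0.0
    subset = frozenset()
    for name, h in items:
        subset = subset | {name}
        g = measure[subset]          # measure of the cumulative subset
        total += h * (g - prev)
        prev = g
    return total
```

With scores {a: 0.9, b: 0.4} and measure g({a}) = 0.6, g({a, b}) = 1.0, the integral is 0.9 * 0.6 + 0.4 * 0.4 = 0.70; the gaze direction with the highest aggregated score would be selected.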

  3. Reaching and Grasping a Glass of Water by Locked-In ALS Patients through a BCI-Controlled Humanoid Robot

    PubMed Central

    Spataro, Rossella; Chella, Antonio; Allison, Brendan; Giardina, Marcello; Sorbello, Rosario; Tramonte, Salvatore; Guger, Christoph; La Bella, Vincenzo

    2017-01-01

Locked-in Amyotrophic Lateral Sclerosis (ALS) patients are fully dependent on caregivers for every daily need. At this stage, basic communication and environmental control may not be possible even with commonly used augmentative and alternative communication devices. Brain-Computer Interface (BCI) technology allows users to modulate brain activity for communication and control of machines and devices without requiring motor control. In recent years, numerous articles have described how persons with ALS can effectively use BCIs for different goals, usually spelling. In the present study, locked-in ALS patients used a BCI system to directly control the humanoid robot NAO (Aldebaran Robotics, France) with the aim of reaching and grasping a glass of water. Four ALS patients and four healthy controls were recruited and trained to operate this humanoid robot through a P300-based BCI. A few minutes' training was sufficient to operate the system efficiently in different environments. Three of the four ALS patients and all controls successfully performed the task with a high level of accuracy. These results suggest that BCI-operated robots can be used by locked-in ALS patients as an artificial alter ego, the machine being able to move, speak, and act in their place. PMID:28298888

  4. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, over 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to stricter recognition constraints. PMID:27579033
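The time-shift correlation idea is to correlate each EEG epoch with a response template at a range of temporal shifts, so that latency jitter in the P300 peak shows up as a shifted correlation maximum rather than a missed match. A minimal sketch, assuming a synthetic Gaussian template in place of a real averaged P300 and leaving out the ANN classifier that would consume these features:

```python
import numpy as np

def time_shift_correlation(epoch, template, max_shift=10):
    """Correlation of an EEG epoch with a P300 template over time shifts.

    Returns one Pearson correlation per shift in [-max_shift, +max_shift];
    in the paper such series feed an ANN, here they are just returned.
    """
    feats = []
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(template, s)
        feats.append(float(np.corrcoef(epoch, shifted)[0, 1]))
    return np.array(feats)
```

An epoch whose P300 arrives a few samples late still produces a near-perfect correlation, just at a nonzero shift index.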

  5. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, over 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to stricter recognition constraints.

  6. Note: Reconfigurable pelvis mechanism for efficient multi-locomotion: Biped and quadruped walking

    NASA Astrophysics Data System (ADS)

    Yoon, Byungho; Kim, Soohyun

    2017-12-01

A reconfigurable pelvis mechanism that can change its length for a multi-locomotion robot is introduced. From the characteristics of animals that walk in a bipedal or quadrupedal manner, we found that the pelvis length for each type of locomotion is related to the efficiency and stability of walking. We demonstrated the effectiveness of this mechanism in biped and quadruped walking through a comparison of accumulated power consumption. We also examined the changes in the supporting polygon according to the pelvis length during quadruped walking in terms of stability.

  7. Note: Reconfigurable pelvis mechanism for efficient multi-locomotion: Biped and quadruped walking.

    PubMed

    Yoon, Byungho; Kim, Soohyun

    2017-12-01

A reconfigurable pelvis mechanism that can change its length for a multi-locomotion robot is introduced. From the characteristics of animals that walk in a bipedal or quadrupedal manner, we found that the pelvis length for each type of locomotion is related to the efficiency and stability of walking. We demonstrated the effectiveness of this mechanism in biped and quadruped walking through a comparison of accumulated power consumption. We also examined the changes in the supporting polygon according to the pelvis length during quadruped walking in terms of stability.
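The supporting-polygon analysis mentioned above rests on a standard static stability measure: the distance from the projection of the center of mass to the nearest edge of the polygon formed by the contact feet. A minimal sketch with made-up foot positions (not the paper's geometry), assuming the projection lies inside a convex polygon:

```python
import numpy as np

def stability_margin(com_xy, support_polygon):
    """Shortest distance from the CoM ground projection to the polygon edges.

    `support_polygon` lists the contact-foot positions in order around a
    convex polygon; a larger margin means a more statically stable stance.
    """
    p = np.asarray(com_xy, float)
    poly = np.asarray(support_polygon, float)
    dists = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        ab = b - a
        # closest point on segment a-b to p
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        dists.append(np.linalg.norm(p - (a + t * ab)))
    return float(min(dists))
```

With the feet spaced farther apart along the pelvis axis, the polygon widens and the margin grows, which is the effect the pelvis-lengthening mechanism exploits in quadruped walking.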

  8. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    PubMed

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics' - show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  9. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors such as infrared and ultrasonic sensors, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space can give a robot much more powerful perception. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm for recognizing the distance and gradient of the environment through a stereo matching process.
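Once stereo matching yields a disparity, distance follows from the standard rectified pinhole relation Z = f * B / d, and a ground gradient can be approximated from two matched ground points. A minimal sketch; the helper names and the numbers in the example are assumptions, not values from the paper:

```python
import math

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo pinhole depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def slope_deg(z_near, z_far, rise_m):
    """Approximate ground gradient between two matched ground points.

    Hypothetical helper: rise over the horizontal run given by the
    difference of the two depths.
    """
    return math.degrees(math.atan2(rise_m, z_far - z_near))
```

For example, a 35-pixel disparity with a 700-pixel focal length and 0.1 m baseline puts the point 2 m away; two ground points 1 m apart in depth and 1 m in height imply a 45-degree incline.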

  10. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows a function-based GUI to simplify implementation of a myriad of operating modes.
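The impedance-based framework described above makes the end effector behave like a virtual spring-damper: a Cartesian force F = K * x_err + D * xdot_err is mapped to joint torques through the Jacobian transpose. A minimal sketch of that mapping, with illustrative gains rather than anything specified in the patent:

```python
import numpy as np

def impedance_torques(J, x_err, xdot_err, K, D):
    """Joint torques realizing a Cartesian impedance law.

    F = K @ x_err + D @ xdot_err is the virtual spring-damper wrench at
    the end effector; tau = J^T @ F maps it to joint space. Gains are
    illustrative placeholders.
    """
    F = K @ x_err + D @ xdot_err    # desired Cartesian force
    return J.T @ F                   # joint-space torque command
```

With a stiff K the robot tracks position firmly; softening K yields compliant contact with the object, which is what permits force-level programming through the GUI.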

  11. Human brain spots emotion in non humanoid robots

    PubMed Central

    Foucher, Aurélie; Jouvent, Roland; Nadel, Jacqueline

    2011-01-01

The computation by which our brain elaborates fast responses to emotional expressions is currently an active field of brain studies. Previous studies have focused on stimuli taken from everyday life. Here, we investigated event-related potentials in response to happy vs neutral stimuli of human and non-humanoid robots. At the behavioural level, emotion shortened reaction times similarly for robotic and human stimuli. The early P1 wave was enhanced in response to happy compared to neutral expressions for robotic as well as for human stimuli, suggesting that emotion from robots is encoded as early as human emotional expression. Congruent with their lower faceness properties compared to human stimuli, robots elicited a later and lower N170 component than human stimuli. These findings challenge the claim that robots need to present an anthropomorphic aspect to interact with humans. Taken together, such results suggest that the early brain processing of emotional expressions is not restricted to human-like arrangements embodying emotion. PMID:20194513

  12. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram (EMG), which involves teeth clenching. Each potential has been individually used in assistive interfaces to provide persons with limb motor disabilities, or even complete quadriplegia, an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
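Extracting discriminative features from two class covariance matrices is the setting of common spatial patterns (CSP): generalized eigenvectors of the tongue-only and eye-only covariances yield spatial filters that maximize variance for one class while minimizing it for the other. The sketch below is classic CSP via whitening, offered as a stand-in for the authors' procedure, which the abstract does not specify in detail; the synthetic signals in the example are invented.

```python
import numpy as np

def covariance_discriminant_filters(X_a, X_b):
    """CSP-style spatial filters from two class covariance matrices.

    X_a, X_b are (channels, samples) arrays for the two conditions
    (e.g., tongue-movement-only and eye-movement-only recordings).
    Rows of the returned W are sorted by eigenvalue: the last row
    favors class a variance, the first row favors class b.
    """
    C_a, C_b = np.cov(X_a), np.cov(X_b)
    d, E = np.linalg.eigh(C_a + C_b)
    P = E @ np.diag(d ** -0.5) @ E.T        # whiten the composite covariance
    vals, B = np.linalg.eigh(P @ C_a @ P.T)
    return B.T @ P, vals
```

Projecting data through the extreme filters and comparing variances then gives a feature that separates tongue-driven from eye-driven activity.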

  13. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly to handle voice and images. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and also to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on a Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
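
    FastICA itself is standard; a self-contained sketch of the deflationary variant with a tanh contrast (function names and hyperparameters are mine, not the authors'):

    ```python
    import numpy as np

    def fastica(x, n_components, max_iter=200, tol=1e-6, seed=0):
        """Minimal FastICA: tanh contrast, deflationary orthogonalization.
        x has shape (n_channels, n_samples); returns estimated sources."""
        rng = np.random.default_rng(seed)
        x = x - x.mean(axis=1, keepdims=True)
        # Whiten via eigendecomposition of the channel covariance.
        d, e = np.linalg.eigh(np.cov(x))
        z = (e @ np.diag(d ** -0.5) @ e.T) @ x
        w_all = np.zeros((n_components, z.shape[0]))
        for i in range(n_components):
            w = rng.standard_normal(z.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(max_iter):
                g = np.tanh(z.T @ w)                  # contrast nonlinearity
                w_new = (z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
                # Deflation: stay orthogonal to components already found.
                w_new -= w_all[:i].T @ (w_all[:i] @ w_new)
                w_new /= np.linalg.norm(w_new)
                converged = abs(abs(w_new @ w) - 1.0) < tol
                w = w_new
                if converged:
                    break
            w_all[i] = w
        return w_all @ z
    ```

    On a two-microphone mixture of two talkers, the rows of the returned array are the estimated sources, recovered up to sign and scale.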

  14. Embedded diagnostic, prognostic, and health management system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more actuators and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.

  15. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    PubMed Central

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant of their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body-image. In the learning process the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on a DARwIn-OP humanoid robot. PMID:26998923
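
    The hypothesized sensorimotor associative learning can be caricatured with a one-shot Hebbian outer-product memory linking motor commands to the visual body-images seen while executing them; this is an illustrative sketch, not the paper's network:

    ```python
    import numpy as np

    def hebbian_associate(motor_patterns, visual_patterns):
        """Accumulate motor/visual co-activations into one associative
        matrix (the self-exploration phase in front of the mirror)."""
        return sum(np.outer(m, v) for m, v in zip(motor_patterns, visual_patterns))

    def recall_motor(weights, visual_observation):
        """Read out the motor pattern associated with an observed
        body-image: a crude mirror-neuron-like visual-to-motor mapping."""
        return weights @ visual_observation
    ```

    With roughly orthogonal patterns, presenting a stored visual image recalls the motor command that produced it; associating additional visual perspectives with the same motor pattern is simply more training pairs accumulated into the same matrix.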

  16. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant of their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body-image. In the learning process the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on a DARwIn-OP humanoid robot.

  17. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    DTIC Science & Technology

    2009-03-01

    distributed robots. Proceedings of the Computer Supported Cooperative Work Conference'02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and...humanoid robots. Proceedings of HRI'06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita

  18. A direct methanol fuel cell system to power a humanoid robot

    NASA Astrophysics Data System (ADS)

    Joh, Han-Ik; Ha, Tae Jung; Hwang, Sang Youp; Kim, Jong-Ho; Chae, Seung-Hoon; Cho, Jae Hyung; Prabhuram, Joghee; Kim, Soo-Kil; Lim, Tae-Hoon; Cho, Baek-Kyu; Oh, Jun-Ho; Moon, Sang Heup; Ha, Heung Yong

    In this study, a direct methanol fuel cell (DMFC) system, the first of its kind, has been developed to power a humanoid robot. The DMFC system consists of a stack, a balance of plant (BOP), a power management unit (PMU), and a back-up battery. The stack has 42 unit cells and is able to produce about 400 W at 19.3 V. The robot is 125 cm tall, weighs 56 kg, and consumes 210 W during normal operation. The DMFC system is integrated with the robot and powers it in a stable manner for more than 2 h. The power consumption of the robot during various motions is studied, and load sharing between the fuel cell and the back-up battery is also observed. The loss of methanol feed due to crossover and evaporation amounts to 32.0%, and the efficiency of the DMFC system in terms of net electric power is 22.0%.
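
    The reported stack figures are internally consistent, as a quick back-of-the-envelope check shows:

    ```python
    # Sanity check of the reported DMFC stack figures.
    cells = 42
    stack_power_w = 400.0
    stack_voltage_v = 19.3
    robot_power_w = 210.0

    cell_voltage_v = stack_voltage_v / cells           # ~0.46 V per cell
    stack_current_a = stack_power_w / stack_voltage_v  # ~20.7 A
    headroom = stack_power_w / robot_power_w           # ~1.9x normal consumption
    ```

    About 0.46 V per cell is a typical DMFC operating point under load, and the roughly 1.9x power headroom over normal consumption is what lets the stack cover motion peaks while the back-up battery absorbs transients.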

  19. Grounding Action Words in the Sensorimotor Interaction with the World: Experiments with a Simulated iCub Humanoid Robot

    PubMed Central

    Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony

    2010-01-01

    This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503

  20. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interfaces (BCIs) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potential BCI system. Participants observed the remote environment from the robot's perspective through a head-mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  1. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies onto a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complement subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
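
    The record does not specify how the controllable dimensionality is reduced; one plausible reading is a PCA-style split of the joint space into a few BMI-steered directions plus a complement left to the autonomous planner. A hedged sketch under that assumption (not the authors' method; all names are illustrative):

    ```python
    import numpy as np

    def split_joint_space(joint_data, n_bmi_dims=2):
        """PCA split of high-DOF joint postures into a low-dim subspace
        steered by BMI commands and a complement handled autonomously."""
        mean = joint_data.mean(axis=0)
        _, _, vt = np.linalg.svd(joint_data - mean, full_matrices=False)
        bmi_basis = vt[:n_bmi_dims]     # directions the user controls
        auto_basis = vt[n_bmi_dims:]    # complement left to the planner
        return mean, bmi_basis, auto_basis

    def compose_posture(mean, bmi_basis, bmi_cmd, auto_basis, auto_cmd):
        """Reassemble a full joint posture from the two subspace commands."""
        return mean + bmi_cmd @ bmi_basis + auto_cmd @ auto_basis
    ```

    The BMI then only has to supply `n_bmi_dims` scalar commands, matching the low bandwidth of motor-imagery EEG, while the remaining coordinates are filled in by the whole-body controller.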

  2. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – Students gather to watch as a DARwin-OP miniature humanoid robot from Virginia Tech Robotics demonstrates its soccer abilities at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  3. Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Lee, Jih-Hsien; Chao, Po-Yao; Wang, Chin-Yeh; Chen, Gwo-Dong

    2010-01-01

    As robot technologies develop, many researchers have tried to use robots to support education. Studies have shown that robots can help students develop problem-solving abilities and learn computer programming, mathematics, and science. However, few studies discuss the use of robots to facilitate the teaching of second languages. We discuss whether…

  4. Human-Robot Interaction: Status and Challenges.

    PubMed

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the challenges they pose for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  5. Artificial humanoid for the elderly people.

    PubMed

    Simou, Panagiota; Alexiou, Athanasios; Tiligadis, Konstantinos

    2015-01-01

    While frailty and other multi-scale factors have to be correlated during a geriatric assessment, a few prototype robots have already been developed to measure and provide real-time information concerning elderly daily activities. Cognitive impairment and alterations in daily functions should be immediately recognized by caregivers, so that they can be prevented and possibly treated. In this chapter we recognize the necessity of artificial robots in the personal service of the elderly population, not only as a mobile laboratory-geriatrician, but mainly as a socialized digital humanoid able to develop social behavior and activate memories and emotions.

  6. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    PubMed Central

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  7. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement.

    PubMed

    Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.

  8. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    PubMed Central

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554

  9. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – A miniature humanoid robot known as DARwin-OP, from Virginia Tech Robotics, plays soccer with a red tennis ball for a crowd of students at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  10. Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison

    2013-01-01

    Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements: first, that the robot can allow humans to enter its workspace during operation, and second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers, called Multi-Loop control, which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.
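
    The record does not give the Multi-Loop law itself; the generic joint-level impedance form it presumably resembles (a spring-damper about the desired trajectory, with illustrative gains and names) can be sketched as:

    ```python
    def impedance_torque(q_des, q, qd_des, qd, stiffness, damping, tau_ff=0.0):
        """Joint impedance law: the joint behaves like a spring-damper about
        the desired trajectory, so contact with a person or a handrail
        produces bounded, compliant forces."""
        return stiffness * (q_des - q) + damping * (qd_des - qd) + tau_ff

    def simulate_step_response(q0, q_des, stiffness=50.0, damping=14.0,
                               inertia=1.0, dt=0.001, steps=5000):
        """Forward-Euler roll-out of a single 1 kg*m^2 joint under the law."""
        q, qd = q0, 0.0
        for _ in range(steps):
            tau = impedance_torque(q_des, q, 0.0, qd, stiffness, damping)
            qd += (tau / inertia) * dt
            q += qd * dt
        return q
    ```

    Lowering the stiffness makes the joint yield to unexpected human contact instead of fighting it, which is the first requirement above; precision for handrail attachment comes from the coordinated outer loop.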

  11. Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it.

    PubMed

    Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A

    2014-05-01

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

  12. Artificial heart for humanoid robot using coiled SMA actuators

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Tadesse, Yonas

    2015-03-01

    Previously, we have presented the design and characterization of artificial heart using cylindrical shape memory alloy (SMA) actuators for humanoids [1]. The robotic heart was primarily designed to pump a blood-like fluid to parts of the robot such as the face to simulate blushing or anger by the use of elastomeric substrates for the transport of fluids. It can also be used for other applications. In this paper, we present an improved design by using high strain coiled SMAs and a novel pumping mechanism that uses sequential actuation to create peristalsis-like motions, and hence pump the fluid. Various placements of actuators will be investigated with respect to the silicone elastomeric body. This new approach provides a better performance in terms of the fluid volume pumped.
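
    The sequential actuation can be illustrated with a simple phase-offset firing schedule: each coiled SMA contracts a fixed lag after its neighbour, so the contraction travels along the elastomeric channel like a peristaltic wave (timings below are made up for illustration, not taken from the paper):

    ```python
    def peristalsis_schedule(n_actuators, t, period=2.0, duty=0.25):
        """Which SMA segments are contracted at time t: each actuator fires
        a fixed phase lag after its neighbour, producing a travelling
        contraction wave that pushes fluid along the channel."""
        phase_lag = period / n_actuators
        states = []
        for i in range(n_actuators):
            local = (t - i * phase_lag) % period
            states.append(local < duty * period)
        return states
    ```

    At any instant only the actuators inside the travelling window are powered, which also limits the total SMA heating current.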

  13. Actuator and electronics packaging for extrinsic humanoid hand

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Diftler, Myron A. (Inventor); Reich, David M. (Inventor); Askew, Scott R. (Inventor)

    2013-01-01

    The lower arm assembly for a humanoid robot includes an arm support having a first side and a second side, a plurality of wrist actuators mounted to the first side of the arm support, a plurality of finger actuators mounted to the second side of the arm support and a plurality of electronics also located on the first side of the arm support.

  14. Rotary Series Elastic Actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2013-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.
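
    The claim text describes the layout only, but the defining property of a series elastic actuator is that the torsion spring doubles as a torque sensor: the deflection across it gives the joint torque, and inverting the spring law turns a torque command into a motor position command. A minimal sketch under that standard assumption (names and numbers are illustrative):

    ```python
    def sea_torque(theta_gear_out, theta_joint_out, spring_rate):
        """Joint torque (N*m) read from torsion-spring deflection, given
        the spring rate in N*m/rad."""
        return spring_rate * (theta_gear_out - theta_joint_out)

    def motor_setpoint_for_torque(theta_joint_out, tau_desired, spring_rate):
        """Invert the spring law: where to servo the gear output so the
        spring applies the desired joint torque."""
        return theta_joint_out + tau_desired / spring_rate
    ```

    The spring also filters gear-train impacts, which is why the arrangement suits a humanoid arm working near people.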

  15. Rotary series elastic actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2012-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.

  16. Upper Torso Control for HOAP-2 Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Sandoval, Steven P.

    2005-01-01

    Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, but it also opens many research doors into how humans function. The overall objective is replacing humans operating in unsafe environments. A first target application is assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal for the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task will require the robot to identify several items, pick them up, transport them to the build location, then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will be using the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex models of the system must be constructed in advance; only input/output relationships are required to model the system.
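
    Whatever the eventual ANFIS implementation, the closing point (only input/output pairs are needed, no analytic plant model) can be shown with any model fitted purely to I/O data; here a tiny tanh network with manual backprop stands in for illustration (this is not the project's ANFIS code):

    ```python
    import numpy as np

    def fit_io_model(x, y, hidden=16, lr=0.1, epochs=3000, seed=0):
        """Learn a motor-input -> response mapping purely from I/O pairs,
        with no analytic model of the plant. One hidden tanh layer,
        full-batch gradient descent on mean-squared error."""
        rng = np.random.default_rng(seed)
        w1 = rng.standard_normal((x.shape[1], hidden)) * 0.5
        b1 = np.zeros(hidden)
        w2 = rng.standard_normal((hidden, y.shape[1])) * 0.5
        b2 = np.zeros(y.shape[1])
        for _ in range(epochs):
            h = np.tanh(x @ w1 + b1)
            err = h @ w2 + b2 - y
            # Backpropagate the MSE gradient through both layers.
            gw2 = h.T @ err / len(x)
            gb2 = err.mean(axis=0)
            gh = (err @ w2.T) * (1 - h ** 2)
            gw1 = x.T @ gh / len(x)
            gb1 = gh.mean(axis=0)
            w1 -= lr * gw1; b1 -= lr * gb1
            w2 -= lr * gw2; b2 -= lr * gb2
        def predict(xq):
            return np.tanh(xq @ w1 + b1) @ w2 + b2
        return predict
    ```

    Collected (motor command, observed response) pairs from the HOAP-2 would replace the synthetic data in any real use.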

  17. Role Transfer for Robot Tasking

    DTIC Science & Technology

    2002-04-01

    Artificial Intelligence (AAAI-98). Brooks, R. A., Breazeal, C., Marjanovic, M. & Scassellati, B. (1999). The Cog project: Building a humanoid robot...scale investment in knowledge infrastructure, Communications of the ACM 38(11): 33-38. 33 Marjanovic, M. (1995). Learning functional maps between

  18. Design and development of an insect-inspired humanoid gripper that is structurally sound, yet very flexible

    NASA Astrophysics Data System (ADS)

    Hajjaj, S.; Pun, N.

    2013-06-01

    One of the biggest challenges in mechanical robotics design is the balance between structural integrity and flexibility. An industrial robotic gripper may be technically advanced, yet contain only 1 degree of freedom (DOF), and adding more DOFs would make the design complex. The human wrist and fingers, on the other hand, contain 23 DOFs and are lightweight and highly flexible. Robots are becoming more and more a part of our social life; they are increasingly incorporated into social, medical, and personal applications. For such robots to be effective, they need to mimic humans both in performance and in mechanical design. In this work, a humanoid gripper is designed and built to mimic a simplified version of the human wrist and fingers, by mimicking insect and human gripper designs. The main challenge was to ensure that the gripper is structurally sound, yet at the same time flexible and lightweight. A combination of lightweight materials and a unique design of finger actuators was applied. The gripper is controlled by a PARALLAX servo controller 28823 (PSCI), which is mounted on the assembly itself. In the end, a 6-DOF humanoid gripper made of lightweight material, similar in size to the human arm and able to carry a weight of 1 kg, was designed and built.

  19. HET2 Overview

    NASA Technical Reports Server (NTRS)

    Fong, Terrence W.; Bualat, Maria Gabriele; Diftler, Myron A.

    2015-01-01

    2015 mid-year review charts of the Human Exploration Telerobotics 2 project that describe the Astrobee free-flying robot and the Robonaut 2 humanoid robot. Astrobee is a planned replacement for the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES), which are currently in use on the International Space Station (ISS).

  20. Challenges in Building Robots that Imitate People

    DTIC Science & Technology

    2000-01-01

    pages 25-40, 1998. R. Brooks, C. Breazeal (Ferrell), R. Irie, C. Kemp, M. Marjanovic, B. Scassellati, & M. Williamson. Alternative essences of...Breazeal (Ferrell), M. Marjanovic, B. Scassellati, and M. Williamson. The Cog project: building a humanoid robot. In C. Nehaniv, editor, Computation for

  1. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    ERIC Educational Resources Information Center

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  2. Investigating the ability to read others' intentions using humanoid robots.

    PubMed

    Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio

    2015-01-01

    The ability to interact with other people hinges crucially on the possibility to anticipate how their actions will unfold. Recent evidence suggests that a similar skill may be grounded in the fact that we perform an action differently depending on the intention that drives it. Human observers can detect these differences and use them to predict the intention behind the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots indeed can establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.

  3. Tendon Driven Finger Actuation System

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Reich, David M. (Inventor); Bridgwater, Lyndon (Inventor); Linn, Douglas Martin (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor); Hargrave, Brian (Inventor); Valvo, Michael C. (Inventor); Abdallah, Muhammad E. (Inventor); hide

    2013-01-01

    A humanoid robot includes a robotic hand having at least one finger. An actuation system for the robotic finger includes an actuator assembly which is supported by the robot and is spaced apart from the finger. A tendon extends from the actuator assembly to the at least one finger and ends in a tendon terminator. The actuator assembly is operable to actuate the tendon to move the tendon terminator and, thus, the finger.
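    The tendon kinematics implied by this kind of design can be sketched with the standard pulley relation (tendon excursion = pulley radius x joint angle, summed over the joints the tendon crosses). This is a generic illustration, not the patent's mechanism, and the pulley radii below are hypothetical.

```python
import math

# Hypothetical pulley radii (metres), one per finger joint the tendon crosses
RADII = [0.008, 0.006, 0.005]

def tendon_excursion(joint_angles_rad):
    """Total tendon travel needed to reach the given joint angles:
    the sum of (pulley radius * joint angle) over all crossed joints."""
    return sum(r * q for r, q in zip(RADII, joint_angles_rad))

# Curling all three joints by 90 degrees:
dl = tendon_excursion([math.pi / 2] * 3)   # ~0.03 m of tendon travel
```

    An actuator assembly spaced apart from the finger, as in the record above, would need at least this much linear travel (plus slack and elasticity margins) to close the finger fully.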

  4. Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

    PubMed

    Li, Mengfan; Li, Wei; Zhou, Huihui

    2016-02-01

    Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with salient features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the meanings of the visual stimuli and help them concentrate more effectively on their mental activities.
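    As a rough illustration of the peak measurement described in this record (not the authors' actual analysis pipeline), the N200 latency and amplitude of a single-channel epoch can be read off as the most negative sample in a 200-300 ms post-stimulus window. The sampling rate and the synthetic epoch below are assumptions for the example.

```python
import numpy as np

def n200_peak(epoch, fs, window=(0.2, 0.3)):
    """Return (latency_s, amplitude) of the most negative deflection inside
    `window` (seconds post-stimulus) of a single-channel baseline-corrected epoch."""
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    seg = epoch[i0:i1]
    k = int(np.argmin(seg))            # index of the most negative sample
    return (i0 + k) / fs, float(seg[k])

# Synthetic epoch: a Gaussian-shaped negative deflection centred at 260 ms,
# roughly matching the ~-6.7 uV peak amplitude reported in the abstract
fs = 500  # Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)
epoch = -6.7 * np.exp(-((t - 0.26) ** 2) / (2 * 0.02 ** 2))
lat, amp = n200_peak(epoch, fs)
```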

  5. A pilot study for robot appearance preferences among high-functioning individuals with autism spectrum disorder: Implications for therapeutic use

    PubMed Central

    Warren, Zachary; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae; Wakita, Yujin; Ishiguro, Hiroshi; Mimura, Masaru; Minabe, Yoshio; Kikuchi, Mitsuru

    2017-01-01

    Recent rapid technological advances have enabled robots to fulfill a variety of human-like functions, leading researchers to propose the use of such technology for the development and subsequent validation of interventions for individuals with autism spectrum disorder (ASD). Although a variety of robots have been proposed as possible therapeutic tools, the physical appearances of humanoid robots currently used in therapy with these patients are highly varied. Very little is known about how these varied designs are experienced by individuals with ASD. In this study, we systematically evaluated preferences regarding robot appearance in a group of 16 individuals with ASD (ages 10–17). Our data suggest that there may be important differences in preference for different types of robots that vary according to interaction type for individuals with ASD. Specifically, within our pilot sample, children with higher levels of reported ASD symptomatology preferred specific humanoid robots over those perceived as more mechanical or mascot-like. The findings of this pilot study suggest that preferences and reactions to robotic interactions may vary tremendously across individuals with ASD. Future work should evaluate how such differences may be systematically measured and potentially harnessed to facilitate meaningful interactive and intervention paradigms. PMID:29028837

  6. An artificial nociceptor based on a diffusive memristor.

    PubMed

    Yoon, Jung Ho; Wang, Zhongrui; Kim, Kyung Min; Wu, Huaqiang; Ravichandran, Vignesh; Xia, Qiangfei; Hwang, Cheol Seong; Yang, J Joshua

    2018-01-29

    A nociceptor is a critical and special receptor of a sensory neuron that is able to detect noxious stimuli and provide a rapid warning to the central nervous system to start a motor response, both in the human body and in humanoid robotics. It differs from other common sensory receptors in its key features and functions, including the "no adaptation" and "sensitization" phenomena. In this study, we propose and experimentally demonstrate, for the first time, an artificial nociceptor based on a diffusive memristor with critical dynamics. Using this artificial nociceptor, we further built an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices into artificial intelligence systems, such as humanoid robots.

  7. Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Linn, Douglas M. (Inventor); Mehling, Joshua S. (Inventor); Radford, Nicolaus A. (Inventor); Bridgwater, Lyndon (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Davis, Donald R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor); hide

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, two hands, a neck, and a head. The torso extends along a primary axis and presents a pair of shoulders. The pair of arms movably extend from a respective one of the pair of shoulders. Each of the arms has a plurality of arm joints. The neck movably extends from the torso along the primary axis. The neck has at least one neck joint. The head movably extends from the neck along the primary axis. The head has at least one head joint. The shoulders are canted toward one another at a shrug angle that is defined between each of the shoulders such that a workspace is defined between the shoulders.

  8. Modeling, Control and Simulation of Three-Dimensional Robotic Systems with Applications to Biped Locomotion.

    NASA Astrophysics Data System (ADS)

    Zheng, Yuan-Fang

    A three-dimensional, five link biped system is established. Newton-Euler state space formulation is employed to derive the equations of the system. The constraint forces involved in the equations can be eliminated by projection onto a smaller state space system for deriving advanced control laws. A model-referenced adaptive control scheme is developed to control the system. Digital computer simulations of point to point movement are carried out to show that the model-referenced adaptive control increases the dynamic range and speeds up the response of the system in comparison with linear and nonlinear feedback control. Further, the implementation of the controller is simpler. Impact effects of biped contact with the environment are modeled and studied. The instant velocity change at the moment of impact is derived as a function of the biped state and contact speed. The effects of impact on the state, as well as constraints are studied in biped landing on heels and toes simultaneously or on toes first. Rate and nonlinear position feedback are employed for stability of the biped after the impact. The complex structure of the foot is properly modeled. A spring and dashpot pair is suggested to represent the action of plantar fascia during the impact. This action prevents the arch of the foot from collapsing. A mathematical model of the skeletal muscle is discussed. A direct relationship between the stimulus rate and the active state is established. A piecewise linear relation between the length of the contractile element and the isometric force is considered. Hill's characteristic equation is maintained for determining the actual output force during different shortening velocities. A physical threshold model is proposed for recruitment which encompasses the size principle, its manifestations and exceptions to the size principle. Finally the role of spindle feedback in stability of the model is demonstrated by study of a pair of muscles.

  9. Generation of the Human Biped Stance by a Neural Controller Able to Compensate Neurological Time Delay

    PubMed Central

    Jiang, Ping; Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun

    2016-01-01

    The development of a physiologically plausible computational model of a neural controller that can realize a human-like biped stance is important for a large number of potential applications, such as assisting device development and designing robotic control systems. In this paper, we develop a computational model of a neural controller that can maintain a musculoskeletal model in a standing position, while incorporating a 120-ms neurological time delay. Unlike previous studies that have used an inverted pendulum model, a musculoskeletal model with seven joints and 70 muscular-tendon actuators is adopted to represent the human anatomy. Our proposed neural controller is composed of both feed-forward and feedback controls. The feed-forward control corresponds to the constant activation input necessary for the musculoskeletal model to maintain a standing posture. This compensates for gravity and regulates stiffness. The developed neural controller model can replicate two salient features of the human biped stance: (1) physiologically plausible muscle activations for quiet standing; and (2) selection of a low active stiffness for low energy consumption. PMID:27655271

  10. A Robotic Therapy Case Study: Developing Joint Attention Skills with a Student on the Autism Spectrum

    ERIC Educational Resources Information Center

    Charron, Nancy; Lewis, Lundy; Craig, Michael

    2017-01-01

    The purpose of this article is to describe a possible methodology for developing joint attention skills in students with autism spectrum disorder. Co-robot therapy with the humanoid robot NAO was used to foster a student's joint attention skill development; 20-min sessions conducted once weekly during the school year were video recorded and…

  11. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    ERIC Educational Resources Information Center

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  12. Development of haptic based piezoresistive artificial fingertip: Toward efficient tactile sensing systems for humanoids.

    PubMed

    TermehYousefi, Amin; Azhari, Saman; Khajeh, Amin; Hamidon, Mohd Nizar; Tanaka, Hirofumi

    2017-08-01

    Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to the development of tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible electronics. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) was carried out on the tomato to obtain its stress distribution and failure point under different external loads. The extracted computational analysis information was then used to design and fabricate the nanocomposite-based artificial fingertip and to examine tomato maturity analysis. The obtained results demonstrate that the fabricated conformable and scalable artificial fingertip shows different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, producing a periodic response under an applied load. Copyright © 2017. Published by Elsevier B.V.

  13. Motor contagion during human-human and human-robot interaction.

    PubMed

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interaction partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.

  14. Motor Contagion during Human-Human and Human-Robot Interaction

    PubMed Central

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of “mutual understanding” that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interaction partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990

  15. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  16. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner.

    PubMed

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.

  17. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner

    PubMed Central

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967

  18. Development of a novel humanoid-robot simulator for endoscope with pharyngeal reflex and real-life responses.

    PubMed

    Ueki, Masaru; Uehara, Kazutake; Isomoto, Hajime

    2018-05-15

    In recent years, there has been a growing need to quantify the skills of endoscopic specialists. Various educational simulators have been created to help increase the endoscopy performance of medical students and trainees. Recent research suggests that the use of simulators helps increase the skill level of endoscopists while improving patient safety [1, 2]. However, previous simulators lack sufficient realism and are unable to replicate natural human reactions during endoscopy or to quantify endoscopic skills. We developed a novel humanoid-robot simulator (named mikoto ® ) with pharyngeal reflexes and real-life responses to endoscopy. This article is protected by copyright. All rights reserved.

  19. Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger

    PubMed Central

    Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.

    2013-01-01

    In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, the performance of finger reaching movements to visual targets involving the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear, and also often includes mechanical coupling of the last two joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing whether this cortical model is able to learn the inverse kinematics for an actual anthropomorphic humanoid finger with its last two joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569
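    The coupled-joint inverse-kinematics problem described here can be illustrated with a minimal numerical sketch. Note that this uses a generic damped-Newton solver with a finite-difference Jacobian, not the paper's cortical model, and the planar link lengths are hypothetical.

```python
import numpy as np

L = np.array([0.045, 0.025, 0.018])  # phalanx lengths in metres (hypothetical)

def fk(q):
    """Planar fingertip position for two free joint angles, with the distal
    joint mechanically slaved to the middle one (q3 = q2)."""
    a = np.cumsum([q[0], q[1], q[1]])      # coupling of the last two joints
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def ik(target, q0=(0.3, 0.3), iters=100, step=0.5):
    """Damped Newton iteration on the two independent joint angles."""
    q, eps = np.array(q0, dtype=float), 1e-6
    for _ in range(iters):
        err = target - fk(q)
        # Numerical Jacobian via central differences
        J = np.column_stack([
            (fk(q + np.array([eps, 0])) - fk(q - np.array([eps, 0]))) / (2 * eps),
            (fk(q + np.array([0, eps])) - fk(q - np.array([0, eps]))) / (2 * eps),
        ])
        q = q + step * np.linalg.pinv(J) @ err
    return q

target = fk(np.array([0.5, 0.4]))   # a known reachable fingertip position
q = ik(target)
```

    The coupling means only two angles are free even though the chain has three joints, which is exactly what makes the sensory-to-motor mapping nonsquare and nonlinear.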

  20. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF) of motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman filter based face-tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.

  1. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
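    The core particle-filter step behind this kind of self-localization can be sketched as follows: weight candidate poses by how well their predicted distances to known field markers match the measured distances. This is a generic weighting step on a 9 m x 6 m field (the SPL field size), not the paper's reliability-based particle selection strategy, and the landmark positions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Known field-marker positions in metres (illustrative)
LANDMARKS = np.array([[0.0, 0.0], [9.0, 0.0], [0.0, 6.0]])

def measure(pose):
    """Distances from a 2-D position to each known field marker."""
    return np.linalg.norm(LANDMARKS - pose, axis=1)

def localize(z, n=2000, sigma=0.3):
    """One global-localization step: draw uniform candidate poses, weight
    them by agreement with the measured marker distances z, and return
    the weighted mean pose."""
    particles = rng.uniform([0, 0], [9, 6], size=(n, 2))
    d = np.array([measure(p) for p in particles])
    w = np.exp(-np.sum((d - z) ** 2, axis=1) / (2 * sigma ** 2))
    w /= w.sum()
    return w @ particles

true_pose = np.array([4.0, 3.0])
est = localize(measure(true_pose))
```

    A full filter would also propagate particles with odometry between measurements and resample by weight; the paper's contribution is precisely a cheaper particle selection strategy for that loop.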

  2. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems is move-to-grasp problems, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  3. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems is move-to-grasp problems, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  4. Designing a Robot for Cultural Brokering in Education

    ERIC Educational Resources Information Center

    Kim, Yanghee

    2016-01-01

    The increasing number of English language learning children in U.S. classrooms and the need for effective programs that support these children present a great challenge to the current educational paradigm. The challenge may be met, at least in part, by an innovative humanoid robot serving as a cultural broker that mediates collaborative…

  5. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Kim, Youngmoo E.

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712
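    The decoding-to-command step of a four-class BCI like this can be sketched with a toy nearest-centroid classifier on synthetic HbO features. The classifier is a stand-in for whatever model the study actually trained, and the task-to-command pairing follows the order the two lists are given in the abstract but is otherwise an assumption.

```python
import numpy as np

# Assumed pairing: tasks and commands listed in matching order in the abstract
CLASSES = ["left_hand", "right_hand", "left_foot", "right_foot"]
COMMANDS = dict(zip(CLASSES, ["turn_left", "turn_right", "move_forward", "move_backward"]))

def fit_centroids(X, y):
    """Per-class mean HbO feature vector (nearest-centroid training)."""
    return {c: X[y == c].mean(axis=0) for c in set(y)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: float(np.linalg.norm(x - centroids[c])))

# Toy, well-separated HbO features: one cluster per imagery task
rng = np.random.default_rng(0)
proto = {"left_hand": [5, 0, 0], "right_hand": [-5, 0, 0],
         "left_foot": [0, 5, 0], "right_foot": [0, -5, 0]}
X = np.vstack([np.array(proto[c]) + rng.normal(0, 0.3, size=(20, 3)) for c in CLASSES])
y = np.array([c for c in CLASSES for _ in range(20)])
cents = fit_centroids(X, y)
cmd = COMMANDS[predict(cents, np.array(proto["left_foot"]))]
```

    In an online loop, each classified imagery epoch would emit one such high-level command to the simulated or physical robot's navigation controller.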

  6. Robots testing robots: ALAN-Arm, a humanoid arm for the testing of robotic rehabilitation systems.

    PubMed

    Brookes, Jack; Kuznecovs, Maksims; Kanakis, Menelaos; Grigals, Arturs; Narvidas, Mazvydas; Gallagher, Justin; Levesley, Martin

    2017-07-01

    Robotics is increasing in popularity as a method of providing rich, personalized and cost-effective physiotherapy to individuals with some degree of upper limb paralysis, such as those who have suffered a stroke. These robotic rehabilitation systems are often high powered, and exoskeletal systems can attach to the person in a restrictive manner. Therefore, ensuring the mechanical safety of these devices before they come in contact with individuals is a priority. Additionally, rehabilitation systems may use novel sensor systems to measure current arm position. Used to capture and assess patient movements, these first need to be verified for accuracy by an external system. We present the ALAN-Arm, a humanoid robotic arm designed to be used for both accuracy benchmarking and safety testing of robotic rehabilitation systems. The system can be attached to a rehabilitation device and then replay generated or human movement trajectories, as well as autonomously play rehabilitation games or activities. Tests of the ALAN-Arm indicated it could recreate the path of a generated slow movement path with a maximum error of 14.2 mm (mean = 5.8 mm) and perform cyclic movements up to 0.6 Hz with low gain (<1.5 dB). Replaying human data trajectories showed the ability to largely preserve human movement characteristics with slightly higher path length and lower normalised jerk.

  7. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
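The four-task-to-four-command mapping described above can be illustrated with a deliberately simplified decoder. The nearest-centroid classifier and synthetic features below are stand-ins for the study's actual fNIRS pipeline; only the command mapping follows the abstract:

```python
import numpy as np

# Hypothetical 4-class nearest-centroid decoder. The class-to-command map
# follows the abstract (left hand -> turn left, etc.); the features are
# synthetic stand-ins for oxygenated-hemoglobin activation patterns.
COMMANDS = {0: "turn left", 1: "turn right", 2: "move forward", 3: "move backward"}

rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 8))                   # one template per imagery task
trials = centroids + 0.05 * rng.normal(size=(4, 8))   # noisy test trials, one per class

def decode(trial, centroids):
    """Return the robot command for the nearest class template."""
    dists = np.linalg.norm(centroids - trial, axis=1)
    return COMMANDS[int(np.argmin(dists))]

decoded = [decode(tr, centroids) for tr in trials]
```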

  8. Agile and dexterous robot for inspection and EOD operations

    NASA Astrophysics Data System (ADS)

    Handelman, David A.; Franken, Gordon H.; Komsuoglu, Haldun

    2010-04-01

    The All-Terrain Biped (ATB) robot is an unmanned ground vehicle with arms, legs and wheels designed to drive, crawl, walk and manipulate objects for inspection and explosive ordnance disposal tasks. This paper summarizes on-going development of the ATB platform. Control technology for semi-autonomous legged mobility and dual-arm dexterity is described as well as preliminary simulation and hardware test results. Performance goals include driving on flat terrain, crawling on steep terrain, walking on stairs, opening doors and grasping objects. Anticipated benefits of the adaptive mobility and dexterity of the ATB platform include increased robot agility and autonomy for EOD operations, reduced operator workload and reduced operator training and skill requirements.

  9. Effect of feedback from a socially interactive humanoid robot on reaching kinematics in children with and without cerebral palsy: A pilot study.

    PubMed

    Chen, Yuping; Garcia-Vergara, Sergio; Howard, Ayanna M

    2017-08-17

    To examine whether children with and without cerebral palsy (CP) would follow a humanoid robot's (i.e., Darwin's) feedback to move their arm faster when playing virtual reality (VR) games. Seven children with mild CP and 10 able-bodied children participated. Real-time reaching was evaluated by playing the Super Pop VR™ system, comprising a 2-game baseline, a 3-game acquisition phase, and a 2-game extinction phase. During acquisition, Darwin provided verbal feedback to direct the child to reach a kinematically defined target goal (i.e., 80% of average movement time in baseline). Outcome variables included the percentage of successful reaches ("% successful reaches"), movement time (MT), average speed, path, and number of movement units. All games during acquisition and extinction had larger "% successful reaches," faster speeds, and shorter MTs than the 2 games during baseline (p < .05). Children with and without CP could follow the robot's feedback for changing their reaching kinematics when playing VR games.
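One of the outcome variables above, the number of movement units, is conventionally derived from peaks in the reaching speed profile: smoother reaches produce fewer units. A sketch under that assumption (the peak threshold is invented, not taken from the study):

```python
import numpy as np

def movement_units(speed, min_height=0.05):
    """Count local maxima in a speed profile that exceed min_height.

    Each qualifying peak is taken as one movement unit; `min_height` is an
    assumed noise floor below which peaks are ignored.
    """
    speed = np.asarray(speed, dtype=float)
    peaks = (speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:]) & (speed[1:-1] > min_height)
    return int(np.count_nonzero(peaks))

t = np.linspace(0.0, 1.0, 200)
smooth = np.sin(np.pi * t)                               # single bell-shaped profile
jerky = np.sin(np.pi * t) * (1 + 0.3 * np.sin(6 * np.pi * t))  # modulated, multi-peaked

units_smooth = movement_units(smooth)
units_jerky = movement_units(jerky)
```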

  10. Kinematically stable bipedal locomotion using ionic polymer-metal composite actuators

    NASA Astrophysics Data System (ADS)

    Hosseinipour, Milad; Elahinia, Mohammad

    2013-08-01

    Ionic conducting polymer-metal composites (abbreviated as IPMCs) are interesting actuators that can act as artificial muscles in robotic and microelectromechanical systems. Various black or gray box models have modeled the electrochemical-mechanical behavior of these materials. In this study, the governing partial differential equation of the behavior of IPMCs is solved using finite element methods to find the critical actuation parameters, such as strain distribution, maximum strain, and response time. One-dimensional results of the FEM solution are then extended to 2D to find the tip displacement of a flap actuator and experimentally verified. A model of a seven-degree-of-freedom biped robot, actuated by IPMC flaps, is then introduced. The possibility of fast and stable bipedal locomotion using IPMC artificial muscles is the main motivation of this study. Considering the actuator limits, joint path trajectories are generated to achieve a fast and smooth motion. The stability of the proposed gait is then evaluated using the ZMP criterion and motion simulation. The fabrication parameters of each actuator, such as length, platinum plating thickness and installation angle, are then determined using the generated trajectories. A discussion on future studies on force-torque generation of IPMCs for biped locomotion concludes this paper.
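The ZMP criterion used to evaluate the proposed gait can be illustrated with a minimal check: compute the ZMP of a linear-inverted-pendulum-style model and test whether it lies inside the convex support polygon. The numbers below are arbitrary and the model is generic, not the paper's IPMC-actuated biped:

```python
import numpy as np

def zmp_from_com(com, com_acc, z_c, g=9.81):
    """ZMP of a linear-inverted-pendulum-style model.

    com, com_acc: horizontal CoM position/acceleration (x, y) in m, m/s^2.
    z_c: constant CoM height in metres.
    """
    return com - (z_c / g) * com_acc

def inside_polygon(point, vertices):
    """Ray-casting point-in-polygon test; vertices is an (N, 2) array."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

foot = np.array([[0.0, 0.0], [0.2, 0.0], [0.2, 0.1], [0.0, 0.1]])  # support polygon (m)
zmp = zmp_from_com(np.array([0.1, 0.05]), np.array([0.2, 0.0]), z_c=0.3)
```

A gait instant passes the criterion when `inside_polygon(zmp, foot)` is true; checking this along the whole planned trajectory is the stability evaluation the abstract refers to.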

  11. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    NASA Astrophysics Data System (ADS)

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with a view toward robotic implementation.
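The transcription the abstract describes, a trajectory optimization turned into a large NLP via direct collocation, can be sketched on a toy problem. Below, trapezoidal collocation transcribes a minimum-effort double integrator moving from rest at 0 to rest at 1 in one second; no granular-media (RFT) dynamics are modelled, and SciPy's SLSQP stands in for IPOPT:

```python
import numpy as np
from scipy.optimize import minimize

N = 20        # collocation intervals
h = 1.0 / N   # time step

def unpack(z):
    """Decision vector holds position, velocity and control at each node."""
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def effort(z):
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)                       # integrated control effort

def defects(z):
    """Trapezoidal dynamics defects plus boundary conditions (all must be 0)."""
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])   # x' = v
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])   # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]              # start/end at rest, reach 1
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * (N + 1))
sol = minimize(effort, z0, constraints={"type": "eq", "fun": defects}, method="SLSQP")
x_opt, v_opt, u_opt = unpack(sol.x)
```

Because every objective and constraint is closed-form, the same structure scales to the thousands of variables of a full gait problem; only the dynamics inside `defects` change.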

  12. How Developmental Psychology and Robotics Complement Each Other

    DTIC Science & Technology

    2000-01-01

    Breazeal, Marjanovic, Scassellati & Williamson), and a system for regulating interaction intensities (Breazeal & Scassellati) have also been implemented... have been previously reported (Scassellati; Scassellati; Brooks, Breazeal, Marjanovic, Scassellati & Williamson; Marjanovic et al.; Brooks, (Ferrell... Marjanovic, M., Scassellati, B. & Williamson, M. M. (1999), The Cog Project: Building a Humanoid Robot, in C. L. Nehaniv, ed., 'Computation for

  13. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make it difficult to apply visual information within robot control algorithms. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image-parameter computation, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
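A fuzzy filter of the general kind described, trained by least squares to undo a distortion in a single evaluation, can be sketched as a zero-order Takagi-Sugeno model. The memberships, synthetic distortion and dataset below are invented for illustration and are not TEO's actual filter:

```python
import numpy as np

def gaussians(x, centers, width):
    """Gaussian membership of each sample in each fuzzy rule."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

centers = np.linspace(-30.0, 30.0, 7)    # rule centres over the angle range (deg)
width = 10.0

def predict(measured, weights):
    """One-step correction: normalised firing strengths times rule consequents."""
    mu = gaussians(np.atleast_1d(measured), centers, width)
    mu /= mu.sum(axis=1, keepdims=True)
    return mu @ weights

# Synthetic training data: true tilt angle vs. a nonlinearly distorted reading.
true_angle = np.linspace(-30.0, 30.0, 200)
measured = true_angle + 2.0 * np.sin(np.radians(4 * true_angle))

mu = gaussians(measured, centers, width)
mu /= mu.sum(axis=1, keepdims=True)
weights, *_ = np.linalg.lstsq(mu, true_angle, rcond=None)  # fit rule consequents

corrected = predict(measured, weights)
```

Once the consequents are fitted offline, correcting a new reading is a single weighted average, which is the "one processing step" property the abstract emphasizes.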

  14. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make it difficult to apply visual information within robot control algorithms. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image-parameter computation, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  15. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    Contents excerpt: 2.2.1 Linear Inverted Pendulum Model; 2.2.2 Planar Five-link Model... Abbreviations: LIPM, Linear Inverted Pendulum Model; LVDT, Linear Variable Differential Transformers; MEMS, Microelectromechanical Systems; MHE, Moving Horizon Estimator; QP...

  16. iss031e148737

    NASA Image and Video Library

    2012-06-27

    ISS031-E-148737 (27 June 2012) --- European Space Agency astronaut Andre Kuipers, Expedition 31 flight engineer, poses for a photo with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  17. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
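The null-space projection that underlies the task decoupling can be shown in a few lines: a secondary joint velocity projected through the operator I - pinv(J)J cannot disturb the primary task velocity, which is what lets parallel tasks coexist and be identified separately. The Jacobian below is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(3, 7))          # primary task Jacobian (3-DoF task, 7 joints)
J_pinv = np.linalg.pinv(J)
N = np.eye(7) - J_pinv @ J           # null-space projector of the primary task

task_ref = np.array([0.1, -0.05, 0.02])
qdot_primary = J_pinv @ task_ref                 # tracks the primary task velocity
qdot_secondary = rng.normal(size=7)              # any secondary joint motion
qdot = qdot_primary + N @ qdot_secondary         # combined controller output

task_velocity = J @ qdot             # unchanged by the projected secondary term
```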

  18. Pleasant to the Touch: By Emulating Nature, Scientists Hope to Find Innovative New Uses for Soft Robotics in Health-Care Technology.

    PubMed

    Cianchetti, Matteo; Laschi, Cecilia

    2016-01-01

    Open your Internet browser and search for videos showing the most advanced humanoid robots. Look at how they move and walk. Observe their motion and their interaction with the environment (the ground, users, target objects). Now, search for a video of your favorite sports player. Despite the undoubtedly great achievements of modern robotics, it will become quite evident that a lot of work still remains.

  19. A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict

    ERIC Educational Resources Information Center

    Mazzoni, Elvis; Benvenuti, Martina

    2015-01-01

    This paper presents an exploratory study in which a humanoid robot (MecWilly) acted as a partner to preschool children, helping them to learn English words. In order to use the Socio-Cognitive Conflict paradigm to induce the knowledge acquisition process, we designed a playful activity in which children worked in pairs with another child or with…

  20. Robonaut Mobile Autonomy: Initial Experiments

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.

    2006-01-01

    A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  1. Humanoid robot Lola: design and walking control.

    PubMed

    Buschmann, Thomas; Lohmeier, Sebastian; Ulbrich, Heinz

    2009-01-01

    In this paper we present the humanoid robot LOLA, its mechatronic hardware design, simulation and real-time walking control. The goal of the LOLA-project is to build a machine capable of stable, autonomous, fast and human-like walking. LOLA is characterized by a redundant kinematic configuration with 7-DoF legs, an extremely lightweight design, joint actuators with brushless motors and an electronics architecture using decentralized joint control. Special emphasis was put on an improved mass distribution of the legs to achieve good dynamic performance. Trajectory generation and control aim at faster, more flexible and robust walking. Center of mass trajectories are calculated in real-time from footstep locations using quadratic programming and spline collocation methods. Stabilizing control uses hybrid position/force control in task space with an inner joint position control loop. Inertial stabilization is achieved by modifying the contact force trajectories.
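As context for CoM trajectory generation, the linear inverted pendulum model admits a closed-form CoM evolution when the ZMP is held constant; LOLA's actual QP/spline-collocation scheme is more involved, so this is only the underlying relation, with arbitrary numbers:

```python
import numpy as np

def lipm_com(t, c0, cdot0, p, z_c=0.8, g=9.81):
    """CoM position of the linear inverted pendulum for a constant ZMP p.

    c(t) = p + (c0 - p) cosh(t/Tc) + Tc * cdot0 * sinh(t/Tc),  Tc = sqrt(z_c/g).
    c0, cdot0: initial CoM position and velocity; z_c: constant CoM height.
    """
    Tc = np.sqrt(z_c / g)
    return p + (c0 - p) * np.cosh(t / Tc) + Tc * cdot0 * np.sinh(t / Tc)

# With the CoM starting at rest directly above the ZMP it stays there;
# any offset grows hyperbolically, which is why the ZMP must be re-planned
# from upcoming footstep locations.
c = lipm_com(np.linspace(0.0, 0.5, 6), c0=0.0, cdot0=0.0, p=0.0)
```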

  2. Slow walking model for children with multiple disabilities via an application of humanoid robot

    NASA Astrophysics Data System (ADS)

    Wang, ZeFeng; Peyrodie, Laurent; Cao, Hua; Agnani, Olivier; Watelain, Eric; Wang, HaoPing

    2016-02-01

    Walk training research with children having multiple disabilities is presented. Orthosis-aided walking for children with multiple disabilities such as cerebral palsy continues to be a clinical and technological challenge. In order to reduce pain and improve treatment strategies, an intermediate structure - the humanoid robot NAO - is proposed as an assay platform to study walking training models, to be transferred to future special exoskeletons for children. A suitable and stable walking model is proposed for walk training, to be simulated and tested on NAO. A comparative study of zero-moment-point (ZMP) support polygons and energy consumption validates the model as more stable than the conventional NAO gait. According to the direction variation of the center of mass and the slopes of the linear-regression knee/ankle angles, the Slow Walk model faithfully emulates the gait pattern of children.

  3. Dexterous Humanoid Robotic Wrist

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Reich, David M. (Inventor); Wampler, II, Charles W. (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Nguyen, Vienny (Inventor)

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, a neck, a head, a wrist joint assembly, and a control system. The arms and the neck movably extend from the torso. Each of the arms includes a lower arm and a hand that is rotatable relative to the lower arm. The wrist joint assembly is operatively defined between the lower arm and the hand. The wrist joint assembly includes a yaw axis and a pitch axis. The pitch axis is disposed in a spaced relationship to the yaw axis such that the axes are generally perpendicular. The pitch axis extends between the yaw axis and the lower arm. The hand is rotatable relative to the lower arm about each of the yaw axis and the pitch axis. The control system is configured for determining a yaw angle and a pitch angle of the wrist joint assembly.
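The yaw- and pitch-angle determination the control system performs can be illustrated with a simple inverse map from a desired pointing direction to the two wrist angles. The yaw-about-z, then pitch-about-y convention is an assumption of this sketch, not taken from the patent:

```python
import numpy as np

def wrist_angles(direction):
    """Yaw and pitch (radians) that point the hand's x axis along `direction`."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    yaw = np.arctan2(d[1], d[0])                       # rotation about z
    pitch = np.arctan2(-d[2], np.hypot(d[0], d[1]))    # rotation about rotated y
    return yaw, pitch

def point(yaw, pitch):
    """Forward map: unit x axis after yaw about z, then pitch about y."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    return np.array([cy * cp, sy * cp, -sp])

yaw, pitch = wrist_angles([0.5, 0.5, -np.sqrt(0.5)])
```

The round trip `point(*wrist_angles(d))` recovering `d` is the consistency property a controller needs between its forward and inverse wrist models.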

  4. A neural framework for organization and flexible utilization of episodic memory in cumulatively learning baby humanoids.

    PubMed

    Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2014-12-01

    Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of the organization of episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration). Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.

  5. Emotion attribution to a non-humanoid robot in different social situations.

    PubMed

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

    In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

  6. Emotion Attribution to a Non-Humanoid Robot in Different Social Situations

    PubMed Central

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

    In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot. PMID:25551218

  7. Pettit enters data in a laptop computer

    NASA Image and Video Library

    2012-03-13

    ISS030-E-142862 (13 March 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, enters data in a computer while working with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  8. Modelling gait transition in two-legged animals

    NASA Astrophysics Data System (ADS)

    Pinto, Carla M. A.; Santos, Alexandra P.

    2011-12-01

    The study of locomotor patterns has been a major research goal in the last decades. Understanding how intralimb and interlimb coordination works out so well in animals' locomotion is a hard and challenging task. Many models have been proposed to capture animals' locomotor rhythms. These models have also been applied to the control of rhythmic movements of adaptive legged robots, namely biped, quadruped and other designs. In this paper we study gait transition in a central pattern generator (CPG) model for bipeds, the 4-cells model. This model was proposed by Golubitsky, Stewart, Buono and Collins and studied further by Pinto and Golubitsky. We briefly summarize the work done by Pinto and Golubitsky. We compute numerically gait transition in the 4-cells CPG model for bipeds. We use Morris-Lecar equations and Wilson-Cowan equations as the internal dynamics for each cell. We also consider two types of coupling between the cells: diffusive and synaptic. We obtain secondary gaits by bifurcation of primary gaits, by varying the coupling strengths. Nevertheless, some bifurcating branches could not be obtained, emphasizing the fact that although those bifurcations exist analytically, finding them is a hard task and requires variation of other parameters of the equations. We note that the type of coupling did not influence the results.
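The gait transitions studied in the 4-cells model can be illustrated with a far simpler two-cell phase-oscillator CPG, where the sign of the coupling strength selects in-phase versus anti-phase locking; Morris-Lecar or Wilson-Cowan internal dynamics are not modelled here:

```python
import numpy as np

def run_cpg(K, steps=20000, dt=0.001, omega=2 * np.pi):
    """Euler-integrate two diffusively coupled phase oscillators.

    Returns the settled phase difference (theta1 - theta0) mod 2*pi.
    The phase difference obeys dphi/dt = -2K sin(phi): K > 0 stabilizes
    phi = 0 (in-phase), K < 0 stabilizes phi = pi (anti-phase, walk-like).
    """
    theta = np.array([0.0, 2.0])                 # arbitrary initial phases
    for _ in range(steps):
        d0 = omega + K * np.sin(theta[1] - theta[0])
        d1 = omega + K * np.sin(theta[0] - theta[1])
        theta += dt * np.array([d0, d1])
    return (theta[1] - theta[0]) % (2 * np.pi)

phase_antiphase = run_cpg(K=-2.0)   # left/right legs half a cycle apart
phase_inphase = run_cpg(K=+2.0)     # both legs together (hop-like)
```

Sweeping K through zero plays the role of varying the coupling strengths in the abstract: the stable phase relation, i.e. the gait, changes by bifurcation.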

  9. Investigating Models of Social Development Using a Humanoid Robot

    DTIC Science & Technology

    1998-01-01

    robot interaction and cooperation (Takanishi, Hirano & Sato 1998, Morita, Shibuya...) and neural models of spinal motor neurons (Williamson 1996)... etiology and behavioral manifestations of pervasive developmental disorders such as autism and... Individuals with autism tend to have normal sensory... grasp the implications of this information. While interested in joint attention both as an explanation... the deficits of autism certainly cover many other

  10. Robonaut 2 - The First Humanoid Robot in Space

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Radford, N. A.; Mehling, J. S.; Abdallah, M. E.; Bridgwater, L. B.; Sanders, A. M.; Askew, R. S.; Linn, D. M.; Yamokoski, J. D.; Permenter, F. A.

    2010-01-01

    NASA and General Motors have developed the second generation Robonaut, Robonaut 2 or R2, and it is scheduled to arrive on the International Space Station in late 2010 and undergo initial testing in early 2011. This state-of-the-art, dexterous, anthropomorphic robotic torso has significant technical improvements over its predecessor, making it a far more valuable tool for astronauts. Upgrades include: increased force sensing, greater range of motion, higher bandwidth and improved dexterity. R2's integrated mechatronics design results in a more compact and robust distributed control system with a fraction of the wiring of the original Robonaut. Modularity is prevalent throughout the hardware and software along with innovative and layered approaches for sensing and control. The most important aspects of the Robonaut philosophy are clearly present in this latest model's ability to allow comfortable human interaction and in its design to perform significant work using the same hardware and interfaces used by people. The following describes the mechanisms, integrated electronics, control strategies and user interface that make R2 a promising addition to the Space Station and other environments where humanoid robots can assist people.

  11. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia

    PubMed Central

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

    The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a “control software” for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient-therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction. PMID:28713296

  12. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia.

    PubMed

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

    The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a "control software" for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient-therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction.

  13. Controlling legs for locomotion-insights from robotics and neurobiology.

    PubMed

    Buschmann, Thomas; Ewald, Alexander; von Twickel, Arndt; Büschges, Ansgar

    2015-06-29

    Walking is the most common terrestrial form of locomotion in animals. Its great versatility and flexibility have led to many attempts at building walking machines with similar capabilities. The control of walking is an active research area both in neurobiology and robotics, with a large and growing body of work. This paper gives an overview of the current knowledge on the control of legged locomotion in animals and machines and attempts to give walking control researchers from biology and robotics an overview of the current knowledge in both fields. We try to summarize the knowledge on the neurobiological basis of walking control in animals, emphasizing common principles seen in different species. In a section on walking robots, we review common approaches to walking controller design with a slight emphasis on biped walking control. We show where parallels between robotic and neurobiological walking controllers exist and how robotics and biology may benefit from each other. Finally, we discuss where research in the two fields diverges and suggest ways to bridge these gaps.

  14. Multi-function robots with speech interaction and emotion feedback

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Lou, Guanting; Ma, Mengchao

    2018-03-01

    Nowadays, service robots have been applied in many public settings; however, most of them still lack the function of speech interaction, especially speech-emotion interaction feedback. To make the robot more humanoid, an Arduino microcontroller was used in this study for the speech recognition module and the servo motor control module, achieving the functions of speech interaction and emotion feedback. In addition, a W5100 Ethernet controller was adopted for network connection to achieve information transmission via the Internet, providing broad application prospects for the robot in the area of the Internet of Things (IoT).

  15. Biomimetic shoulder complex based on 3-PSS/S spherical parallel mechanism

    NASA Astrophysics Data System (ADS)

    Hou, Yulei; Hu, Xinzhe; Zeng, Daxing; Zhou, Yulin

    2015-01-01

    The application of parallel mechanisms is still limited in the humanoid robot field, and existing parallel humanoid robot joints do not yet fully exploit the characteristics of parallel mechanisms, nor have they effectively solved problems such as a small workspace. From a structural and functional bionic point of view, a three-degrees-of-freedom (DOF) spherical parallel mechanism for the shoulder complex of a humanoid robot is presented. Based on an analysis of the structure and kinetic characteristics of the human shoulder complex, 3-PSS/S (P for prismatic pair, S for spherical pair) is chosen as the original configuration for the shoulder complex. Using a genetic algorithm, the 3-PSS/S spherical parallel mechanism is optimized, and the orientation workspace of the prototype mechanism is enlarged considerably. Combining the practical structural characteristics of the human shoulder complex, an offset output mode is put forward, in which the output rod of the mechanism can turn in any direction about a point at a certain distance from the rotation center of the mechanism; this makes it possible to match the workspace of the mechanism to the actual motion space of the human shoulder joint. The relationship between the attitude angles in different coordinate systems is derived, establishing the foundation for motion descriptions under different conditions and for control development. The 3-PSS/S spherical parallel mechanism is proposed for the shoulder complex, and consistency between the workspace of the mechanism and that of the human shoulder complex is achieved through structural parameter optimization and the offset output design.
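    The offset output idea can be illustrated with a minimal numerical sketch. This assumes ZYX Euler attitude angles and a reference axis along z; both are assumptions for illustration, since the paper's actual conventions and parameters are not given here. The output point is simply the offset vector of length d, rotated by the mechanism's attitude.

    ```python
    import numpy as np

    def rot_zyx(yaw, pitch, roll):
        """Rotation matrix from ZYX Euler (attitude) angles, in radians."""
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        return Rz @ Ry @ Rx

    def offset_output_point(yaw, pitch, roll, d):
        """Position of an output point a distance d from the rotation
        center, along the rotated reference axis of the output rod.
        The z reference axis is an assumption for this sketch."""
        axis = np.array([0.0, 0.0, 1.0])
        return rot_zyx(yaw, pitch, roll) @ (d * axis)

    # pitching the output rod 90 degrees swings the offset point onto the x axis
    p = offset_output_point(0.0, np.pi / 2, 0.0, 0.1)
    ```

    With an offset d = 0.1 m, a 90-degree pitch moves the output point from (0, 0, 0.1) to (0.1, 0, 0), which is the kind of attitude-to-position mapping the offset output mode relies on.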

  16. Concurrent Path Planning with One or More Humanoid Robots

    NASA Technical Reports Server (NTRS)

    Reiland, Matthew J. (Inventor); Sanders, Adam M. (Inventor)

    2014-01-01

    A robotic system includes a controller and one or more robots each having a plurality of robotic joints. Each of the robotic joints is independently controllable to thereby execute a cooperative work task having at least one task execution fork, leading to multiple independent subtasks. The controller coordinates motion of the robot(s) during execution of the cooperative work task. The controller groups the robotic joints into task-specific robotic subsystems, and synchronizes motion of different subsystems during execution of the various subtasks of the cooperative work task. A method for executing the cooperative work task using the robotic system includes automatically grouping the robotic joints into task-specific subsystems, and assigning subtasks of the cooperative work task to the subsystems upon reaching a task execution fork. The method further includes coordinating execution of the subtasks after reaching the task execution fork.
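    The grouping of joints into task-specific subsystems at a task execution fork can be illustrated with a toy data structure. The names and structure below are hypothetical, not the patented controller's actual design; the sketch only shows the bookkeeping step of mapping joints to subtasks.

    ```python
    def group_joints(joints, subtask_assignments):
        """Map each subtask name to the list of joints assigned to it,
        forming one task-specific subsystem per subtask."""
        groups = {}
        for joint, subtask in zip(joints, subtask_assignments):
            groups.setdefault(subtask, []).append(joint)
        return groups

    # At a task execution fork, one arm holds a part while the other fastens it
    # (joint and subtask names are hypothetical).
    joints = ["L_shoulder", "L_elbow", "R_shoulder", "R_elbow"]
    assign = ["hold", "hold", "fasten", "fasten"]
    subsystems = group_joints(joints, assign)
    ```

    After the fork, each subsystem can be commanded independently, with the controller synchronizing their motions until the subtasks rejoin.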

  17. Balance Maintenance in High-Speed Motion of Humanoid Robot Arm-Based on the 6D Constraints of Momentum Change Rate

    PubMed Central

    Zhang, Da-song; Chu, Jian

    2014-01-01

    Based on the 6D constraints of momentum change rate (CMCR), this paper puts forward a real-time, full balance maintenance method for a humanoid robot during high-speed movement of its 7-DOF arm. First, the total momentum formula for the robot's two arms is given, and the momentum change rate is defined as the time derivative of the total momentum. The authors also illustrate the idea of full balance maintenance and analyze the physical meaning of the 6D CMCR and its fundamental relation to full balance maintenance. Moreover, a discretized optimization solution of the CMCR is provided under the motion constraints of the auxiliary arm's joints, and the solving algorithm is optimized. The simulation results show the validity and generality of the proposed method for full balance maintenance in the 6 DOFs of the robot body under the 6D CMCR. The method ensures 6D dynamic balance performance and provides an ample ZMP stability margin. The resulting motion of the auxiliary arm retains large redundancy in joint space, and the angular velocities and angular accelerations of its joints lie within the predefined limits. The proposed algorithm also has good real-time performance. PMID:24883404
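    The central quantity, the time derivative of the robot's total momentum, can be sketched numerically. The point-mass arm model and the finite-difference estimate below are assumptions for illustration, not the paper's formulation, which works from the analytic momentum formula of the two arms.

    ```python
    import numpy as np

    def total_momentum(masses, positions, velocities):
        """6D momentum [linear P; angular L about the origin] of a set of
        point masses standing in for the arm links."""
        P = sum(m * v for m, v in zip(masses, velocities))
        L = sum(m * np.cross(r, v) for m, r, v in zip(masses, positions, velocities))
        return np.concatenate([P, L])

    def momentum_change_rate(h_prev, h_curr, dt):
        """Finite-difference estimate of the 6D momentum change rate."""
        return (h_curr - h_prev) / dt

    masses = [1.0, 0.5]
    # arm at rest, then both point masses at height 1 m moving at 0.2 m/s in x
    h0 = total_momentum(masses, [np.zeros(3)] * 2, [np.zeros(3)] * 2)
    v = np.array([0.2, 0.0, 0.0])
    h1 = total_momentum(masses, [np.array([0.0, 0.0, 1.0])] * 2, [v, v])
    rate = momentum_change_rate(h0, h1, dt=0.01)
    ```

    A balance controller in the spirit of the paper would then constrain all six components of this rate, rather than only the horizontal ones that determine the ZMP.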

  18. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness. PMID:28912736

  19. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley"-a phenomenon in which highly humanlike entities provoke aversion in human observers-has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding-suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.

  20. Acquisition of Basic Behaviors through Teleoperation using Robonaut

    NASA Technical Reports Server (NTRS)

    Campbell, Christina

    2004-01-01

    My area of research is in artificial intelligence and robotics. The major platform of this research is NASA's Robonaut. This humanoid robot is located at the Johnson Space Center. Prior to receiving this grant, I was able to spend two summers in Houston working with the Robonaut team, which is headed by Rob Ambrose. My work centered on teaching Robonaut to grasp a wrench based on data gathered as a human teleoperated the robot. I tried to make the procedure as general as possible so that many different motions could be taught using this method.

  1. Achieving Collaborative Interaction with a Humanoid Robot

    DTIC Science & Technology

    2003-01-01

    gestures will become more prevalent in the kinds of interactions we study. Gesturing is a natural part of human-human communication. It...to human communication. However, in human to human experiments, Tversky et al. observed a similar result and found that speakers took the

  2. Developmental Approach for Behavior Learning Using Primitive Motion Skills.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2018-05-01

    Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could be first achieved through imitation, yet the discussion on the origin of primitive imitative abilities is often neglected, referring instead to the possibility of its innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities as induced by sensorimotor associative learning through self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw images acquired from the camera without requiring any kinematic model; incremental learning of spatio-temporal motion sequences to dynamically generate a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and utilization of the segmented motion primitives to generate complex behaviors by combining them. In our experiment, the self-posture is acquired by observing the image of the robot's own body posture while performing actions in front of a mirror through body babbling. The complete architecture was evaluated by simulation and by real robot experiments performed on the DARwIn-OP humanoid robot.
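    The first step, segmenting an observed action into motion primitives, can be sketched in one dimension. The sketch below uses a scalar motion trace and a frame-to-frame-change threshold as a stand-in for image-based motion energy; the threshold and trace are illustrative assumptions, not the paper's actual image-processing pipeline.

    ```python
    import numpy as np

    def segment_primitives(trajectory, threshold=0.05):
        """Split a motion trace into primitives: contiguous runs where the
        frame-to-frame change (a proxy for motion energy) stays above a
        threshold; pauses between runs separate the primitives."""
        speed = np.abs(np.diff(trajectory))
        moving = speed >= threshold
        segments, start = [], None
        for i, m in enumerate(moving):
            if m and start is None:
                start = i                       # a primitive begins
            elif not m and start is not None:
                segments.append(trajectory[start:i + 1])  # it ends at a pause
                start = None
        if start is not None:
            segments.append(trajectory[start:len(trajectory)])
        return segments

    # two movements separated by a pause
    traj = np.concatenate([np.linspace(0, 1, 10), np.full(5, 1.0), np.linspace(1, 0, 10)])
    parts = segment_primitives(traj)
    ```

    Each recovered segment would then be fed to the incremental sequence learner and stored in the associative memory for later recombination.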

  3. Reducing children's pain and distress towards flu vaccinations: a novel and effective application of humanoid robotics.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2013-06-07

    Millions of children in North America receive an annual flu vaccination, many of whom are at risk of experiencing severe distress. Millions of children also use technologically advanced devices such as computers and cell phones. Based on this familiarity, we introduced another sophisticated device - a humanoid robot - to interact with children during their vaccination. We hypothesized that these children would experience less pain and distress than children who did not have this interaction. This was a randomized controlled study in which 57 children (30 male; age, mean±SD: 6.87±1.34 years) were randomly assigned to a vaccination session with a nurse who used standard administration procedures, or with a robot that was programmed to use cognitive-behavioral strategies with them while a nurse administered the vaccination. Measures of pain and distress were completed by children, parents, nurses, and researchers. Multivariate analyses of variance indicated that interaction with a robot during flu vaccination resulted in significantly less pain and distress in children according to parent, child, nurse, and researcher ratings with effect sizes in the moderate to high range (Cohen's d=0.49-0.90). This is the first study to examine the effectiveness of child-robot interaction for reducing children's pain and distress during a medical procedure. All measures of reduction were significant. These findings suggest that further research on robotics at the bedside is warranted to determine how they can effectively help children manage painful medical procedures. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  4. Developing Humanoid Robots for Real-World Environments

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Kuhlman, Michael; Assad, Chris; Keymeulen, Didier

    2008-01-01

    Humanoids are steadily improving in the appearance and functionality they demonstrate in controlled environments. To address the challenges of operation in the real world, researchers have proposed the use of brain-inspired architectures for robot control, and the use of robot learning techniques that enable the robot to acquire and tune skills and behaviors. In the first part of the paper we introduce new concepts and results in these two areas. First, we present a cerebellum-inspired model that demonstrated efficiency in the sensory-motor control of anthropomorphic arms, and in gait control of dynamic walkers. Then, we present a set of new ideas related to robot learning, emphasizing the importance of developing teaching techniques that support learning. In the second part of the paper we propose the use in robotics of iterative and incremental development methodologies, in the context of practical task-oriented applications. These methodologies promise to rapidly reach system-level integration, and to identify early the system-level weaknesses to focus on. We apply this methodology to a task targeting the automated assembly of a modular structure using HOAP-2. We confirm that this approach led to rapid development of an end-to-end capability, and offered guidance on which technologies to focus on for gradual improvement of a complete functional system. It is believed that providing Grand Challenge type milestones in practical task-oriented applications accelerates development. As a meaningful target in the short-to-mid term we propose the 'IKEA Challenge', aimed at the demonstration of autonomous assembly of various pieces of furniture, from the box, following the included written/drawn instructions.

  5. An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.

    PubMed

    Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S

    2011-01-01

    Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by social functioning characterized by impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships, and lack of social and emotional exchanges. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of robotics as an improved diagnostic and early intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot has the ability to track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and its eyelids (1 DOF each), raise its wings (1 DOF each), play sound, and record sound. These attributes will give it the ability to be used for the diagnosis and treatment of autism. As part of this project, the robot and the electronic and control software have been developed; integrating semi-autonomous interaction and teleoperation from a remote healthcare provider, and initiating trials with children in a local clinic, are in progress.

  6. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study

    PubMed Central

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna

    2018-01-01

    Background Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. Objective The aim of this study was to explore participants’ qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. Methods NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot’s head sensor. Evaluations were content-analyzed utilizing Boyatzis’ steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Results Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. 
Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Conclusions Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. PMID:29724701

  7. The use of new technologies for nutritional education in primary schools: a pilot study.

    PubMed

    Rosi, A; Dall'Asta, M; Brighenti, F; Del Rio, D; Volta, E; Baroni, I; Nalin, M; Coti Zelati, M; Sanna, A; Scazzina, F

    2016-11-01

    The aim of this study was to evaluate whether the presence of a humanoid robot could improve the efficacy of a game-based, nutritional education intervention. This was a controlled, school-based pilot intervention carried out on fourth-grade school children (8-10 years old). A total of 112 children underwent a game-based nutritional educational lesson on the importance of carbohydrates. For one group (n = 58), the lesson was carried out by a nutritional educator, the Master of Taste (MT), whereas for another group (n = 54), the Master of Taste was supported by a humanoid robot (MT + NAO). A third group of children (n = 33) served as a control, not receiving any lesson. The intervention efficacy was evaluated by questionnaires administered at the beginning and at the end of each intervention. The nutritional knowledge level was evaluated by the cultural-nutritional awareness factor (AF) score. A total of 290 questionnaires were analyzed. Both MT and MT + NAO interventions significantly increased nutritional knowledge. At the end of the study, children in the MT and MT + NAO groups showed similar AF scores, and the AF scores of both intervention groups were significantly higher than the AF score of the control group. This study showed a significant increase in the nutritional knowledge of children involved in a game-based, single-lesson, educational intervention performed by a figure with a background in food science. However, the presence of a humanoid robot to support this figure's teaching activity did not result in any significant learning improvement. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  8. Keep focussing: striatal dopamine multiple functions resolved in a single mechanism tested in a simulated humanoid robot

    PubMed Central

    Fiore, Vincenzo G.; Sperati, Valerio; Mannella, Francesco; Mirolli, Marco; Gurney, Kevin; Friston, Karl; Dolan, Raymond J.; Baldassarre, Gianluca

    2014-01-01

    The effects of striatal dopamine (DA) on behavior have been widely investigated over the past decades, with “phasic” burst firings considered as the key expression of a reward prediction error responsible for reinforcement learning. Less well studied is “tonic” DA, where putative functions include the idea that it is a regulator of vigor, incentive salience, disposition to exert an effort and a modulator of approach strategies. We present a model combining tonic and phasic DA to show how different outflows triggered by either intrinsically or extrinsically motivating stimuli dynamically affect the basal ganglia by impacting on a selection process this system performs on its cortical input. The model, which has been tested on the simulated humanoid robot iCub interacting with a mechatronic board, shows the putative functions ascribed to DA emerging from the combination of a standard computational mechanism coupled to a differential sensitivity to the presence of DA across the striatum. PMID:24600422

  9. Improving Cognitive Skills of the Industrial Robot

    NASA Astrophysics Data System (ADS)

    Bezák, Pavol

    2015-08-01

    At present, there are plenty of industrial robots that are programmed to do the same repetitive task all the time. Industrial robots doing this kind of job are not able to judge whether an action is correct, effective, or good. Object detection, manipulation, and grasping are challenging due to hand and object modeling uncertainties, unknown contact types, and object stiffness properties. In this paper, a proposal for an intelligent humanoid hand object detection and grasping model is presented, assuming that the object properties are known. The control is simulated in Matlab Simulink/SimMechanics, the Neural Network Toolbox, and the Computer Vision System Toolbox.

  10. Project M: An Assessment of Mission Assumptions

    NASA Technical Reports Server (NTRS)

    Edwards, Alycia

    2010-01-01

    Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1,000 days. The robot will travel in a lander, fueled by liquid oxygen and liquid methane, and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

  11. Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.

    PubMed

    Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan

    2015-12-01

    Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well-tolerated in the sample, children with ASD exhibited greater attention to the robotic system than to the human administrator, and for children with ASD imitation performance appeared superior during the robotic interaction.

  12. Small-Group Technology-Assisted Instruction: Virtual Teacher and Robot Peer for Individuals with Autism Spectrum Disorder.

    PubMed

    Saadatzi, Mohammad Nasser; Pennington, Robert C; Welch, Karla C; Graham, James H

    2018-06-20

    The authors combined virtual reality technology and social robotics to develop a tutoring system that resembled a small-group arrangement. This tutoring system featured a virtual teacher instructing sight words, and included a humanoid robot emulating a peer. The authors used a multiple-probe design across word sets to evaluate the effects of the instructional package on the explicit acquisition and vicarious learning of sight words instructed to three children with autism spectrum disorder (ASD) and the robot peer. Results indicated that participants acquired, maintained, and generalized 100% of the words explicitly instructed to them, made fewer errors while learning the words common between them and the robot peer, and vicariously learned 94% of the words solely instructed to the robot.

  13. Adaptive, fast walking in a biped robot under neuronal control and learning.

    PubMed

    Manoonpong, Poramate; Geng, Tao; Kulvicius, Tomas; Porr, Bernd; Wörgötter, Florentin

    2007-07-01

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at a high speed (>3.0 leg lengths/s), self-adapting to minor disturbances and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.
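    The flavor of the online plasticity mechanism can be conveyed with a minimal differential-Hebbian sketch: a weight grows when an early predictive signal correlates with the change of a later reflex signal. The rule's exact form and parameters here are assumptions for illustration, not the paper's actual learning rule.

    ```python
    def diff_hebb_step(w, u_pred, x_reflex_prev, x_reflex_curr, mu=0.01, dt=0.01):
        """One update of a differential-Hebbian weight: the predictive
        input u_pred is multiplied by the temporal derivative of the
        reflex signal, so the weight grows only when the prediction
        precedes a reflex onset."""
        dx = (x_reflex_curr - x_reflex_prev) / dt
        return w + mu * u_pred * dx * dt

    w = 0.0
    reflex = [0.0, 0.0, 0.5, 1.0, 1.0]   # reflex fires after the cue
    cue =    [1.0, 1.0, 1.0, 1.0, 0.0]   # predictive signal precedes it
    for t in range(1, len(reflex)):
        w = diff_hebb_step(w, cue[t], reflex[t - 1], reflex[t])
    ```

    Over repeated gait cycles the growing weight lets the predictive pathway trigger the motor response before the reflex does, which is one way a walker can adapt to a new terrain after only a few experiences.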

  14. FE Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013708 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  15. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013710 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  16. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013714 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  17. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013712 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  18. KSC-2010-4382

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a robotics engineer animates the dexterous humanoid astronaut helper, Robonaut (R2) for the participants at a media event hosted by NASA. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  19. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031125 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  20. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031124 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  1. Robonaut 2 in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-02

    ISS034-E-013990 (2 Jan. 2013) --- In the International Space Station’s Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  2. Trotting, pacing and bounding by a quadruped robot.

    PubMed

    Raibert, M H

    1990-01-01

    This paper explores the quadruped running gaits that use the legs in pairs: the trot (diagonal pairs), the pace (lateral pairs), and the bound (front and rear pairs). Rather than study these gaits in quadruped animals, we studied them in a quadruped robot. We found that each of the gaits that use the legs in pairs can be transformed into a common underlying gait, a virtual biped gait. Once transformed, a single set of control algorithms produces all three gaits, with modest parameter variations between them. The control algorithms manipulated rebound height, running speed, and body attitude, while a low-level mechanism coordinated the behavior of the legs in each pair. The approach was tested with laboratory experiments on a four-legged robot. Data are presented that show the details of the running motion for the three gaits and for transitions from one gait to another.
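The virtual-biped transformation can be sketched as follows; the pairing table and the proportional control law are simplified assumptions for illustration, not Raibert's control code:

```python
# Each gait pairs the four legs differently; averaging a pair's states
# yields one virtual leg, so a single control law serves trot, pace,
# and bound alike.
PAIRINGS = {
    "trot":  [("front_left", "rear_right"), ("front_right", "rear_left")],
    "pace":  [("front_left", "rear_left"), ("front_right", "rear_right")],
    "bound": [("front_left", "front_right"), ("rear_left", "rear_right")],
}

def virtual_leg_commands(gait, leg_angles, k=2.0, target=0.0):
    """One proportional attitude command per virtual leg, mirrored to both legs."""
    commands = {}
    for pair in PAIRINGS[gait]:
        virtual_angle = sum(leg_angles[leg] for leg in pair) / 2.0
        torque = k * (target - virtual_angle)  # same law for every gait
        for leg in pair:                       # low-level pair coordination
            commands[leg] = torque
    return commands
```

Switching gait then amounts to swapping the pairing entry, with the control law untouched, which is the point the abstract makes.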

  3. Generalisation, decision making, and embodiment effects in mental rotation: A neurorobotic architecture tested with a humanoid robot.

    PubMed

    Seepanomwan, Kristsana; Caligiore, Daniele; Cangelosi, Angelo; Baldassarre, Gianluca

    2015-12-01

    Mental rotation, a classic experimental paradigm of cognitive psychology, tests the capacity of humans to mentally rotate a seen object to decide if it matches a target object. In recent years, mental rotation has been investigated with brain imaging techniques to identify the brain areas involved. Mental rotation has also been investigated through the development of neural-network models, used to identify the specific mechanisms that underlie its process, and with neurorobotic models to investigate its embodied nature. Current models, however, have limited capacities to relate to neuroscientific evidence, to generalise mental rotation to new objects, to suitably represent decision-making mechanisms, and to allow the study of the effects of overt gestures on mental rotation. The work presented in this study overcomes these limitations by proposing a novel neurorobotic model that has a macro-architecture constrained by existing knowledge of the brain, encompasses a rather general mental rotation mechanism, and incorporates a biologically plausible decision-making mechanism. The model was tested using the humanoid robot iCub in tasks requiring the robot to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot gained an enhanced capacity to generalise mental rotation to new objects and to express the possible effects of overt movements of the wrist on mental rotation. The model also represents a further step in the identification of the embodied neural mechanisms that may underlie mental rotation in humans and might also give hints to enhance robots' planning capabilities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study.

    PubMed

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna; Andrade, Jackie

    2018-05-03

    Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. 
Some participants felt that the intervention increased their physical activity levels. Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. ©Joana Galvão Gomes da Silva, David J Kavanagh, Tony Belpaeme, Lloyd Taylor, Konna Beeson, Jackie Andrade. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.05.2018.

  5. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot.

    PubMed

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating a hypothetical human-derived control system in humanoid robots. This system is then compared with human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to three interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control. Experiments were performed on the 14-DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as they are used in human postural control analysis, and (iii) demonstrating postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions, such as walking, in the robot in the future.
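The core DEC principle, compensating an estimated disturbance such as gravity before the servo loop has to absorb it, can be reduced to a few lines. This is an interpretation of the mechanism for a single inverted-pendulum DoF; the masses, gains, and function names are illustrative assumptions, not values from Lucy Posturob.

```python
import math

def dec_ankle_torque(theta, theta_cmd, dtheta=0.0,
                     m=60.0, h=1.0, g=9.81, kp=800.0, kd=50.0):
    """Servo torque plus a gravity-disturbance estimate for lean angle theta."""
    gravity_estimate = m * g * h * math.sin(theta)  # disturbance estimation...
    servo = kp * (theta_cmd - theta) - kd * dtheta  # commanded-pose servo
    return servo + gravity_estimate                 # ...and compensation
```

Because the gravity term is estimated and cancelled explicitly, the servo gains can stay moderate, which is the practical appeal of the DEC scheme.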

  6. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot

    PubMed Central

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating a hypothetical human-derived control system in humanoid robots. This system is then compared with human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to three interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control. Experiments were performed on the 14-DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as they are used in human postural control analysis, and (iii) demonstrating postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions, such as walking, in the robot in the future. PMID:28951719

  7. Interactive language learning by robots: the transition from babbling to word forms.

    PubMed

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. 
This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
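The simple, frequency-dependent mechanism this abstract describes can be sketched as a toy model; the weighting scheme and function below are assumptions about its general shape, not the authors' implementation.

```python
# Toy sketch: syllables heard in the caregiver's speech bias the robot's
# babble toward the most frequent (salient) forms, so canonical content
# words gradually dominate random syllabic babble.
from collections import Counter
import random

def babble(heard_syllables, n=200, bias=5.0, seed=0):
    """Sample n syllables, each weighted by 1 + bias * its relative frequency."""
    rng = random.Random(seed)
    counts = Counter(heard_syllables)
    total = sum(counts.values())
    forms = list(counts)
    weights = [1.0 + bias * counts[s] / total for s in forms]
    return [rng.choices(forms, weights=weights)[0] for _ in range(n)]
```

Feeding the model speech dominated by one consistent form makes that form the robot's most frequent output, mirroring the observation that consistently pronounced content words gain influence on the learner.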

  8. Design and control of five fingered under-actuated robotic hand

    NASA Astrophysics Data System (ADS)

    Sahoo, Biswojit; Parida, Pramod Kumar

    2018-04-01

    Nowadays, research on humanoid robots and their applications in different fields (industry, household, rehabilitation, and exploration) is going on across the globe. Among the challenging topics is the design of a dexterous robotic hand that can not only serve as the hand of a robot but also be used in rehabilitation. The basic concern is a dexterous robot hand able to mimic the function of the biological hand in performing different operations. This thesis work concerns the design and control of an under-actuated robotic hand consisting of four under-actuated fingers (index, middle, ring, and little fingers), a thumb, and a dexterous palm, which can copy the motions and grasp types of the human hand with 21 degrees of freedom instead of 25.

  9. What makes a robot 'social'?

    PubMed

    Jones, Raya A

    2017-08-01

    Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordance theory so as to identify the level of analysis at which dialogicality enters social interactions.

  10. Utilization of the NASA Robonaut as a Surgical Avatar in Telemedicine

    NASA Technical Reports Server (NTRS)

    Dean, Marc; Diftler, Myron

    2015-01-01

    The concept of teleoperated robotic surgery is not new; however, most of the work to date has utilized specialized robots designed for a specific set of surgeries. This activity explores the use of a humanoid robot to perform surgical procedures using the same hand-held instruments that a human surgeon employs. For this effort, the tele-operated Robonaut (R2) was selected due to its dexterity, its ability to perform a wide range of tasks, and its adaptability to changing environments. To evaluate this concept, a series of challenges was designed with the goal of assessing the feasibility of utilizing Robonaut as a telemedicine-based surgical avatar.

  11. Development of autonomous eating mechanism for biomimetic robots

    NASA Astrophysics Data System (ADS)

    Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung

    2005-12-01

    Most recently developed robots are human-friendly robots that imitate animals or humans, such as entertainment robots, biomimetic robots, and humanoid robots. Interest in these robots is increasing because the social trend is focused on health, welfare, and aging. Autonomous eating is the most unique and inherent behavior of pets and animals. Most entertainment robots and pet robots make use of an internal battery and are therefore unable to operate while the battery is charging. If a robot has an autonomous function for eating batteries as its feed, the robot is not only able to operate while recharging but also becomes more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat, and evacuate by itself. This work describes sub-parts of the developed mechanism such as the head-part, mouth-part, and stomach-part. In addition, the control system of the autonomous eating mechanism is described.

  12. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by its multiple-timescale dynamics, a multiple timescale recurrent neural network (MTRNN), was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated in ongoing task sequences without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic property developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

    PubMed Central

    Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan

    2012-01-01

    Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853

  14. Empowering Student Voice through Interactive Design and Digital Making

    ERIC Educational Resources Information Center

    Kim, Yanghee; Searle, Kristin

    2017-01-01

    Over the last two decades online technology and digital media have provided space for students to participate and express their voices. This paper further explores how new digital technologies, such as humanoid robots and wearable electronics, can be used to offer additional spaces where students' voices are heard. In these spaces, young students…

  15. An Algorithm for Pedestrian Detection in Multispectral Image Sequences

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.

    2017-05-01

    The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is the problem of pedestrian detection. The main difficulties arise from the diverse appearances of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference of the synthetic optical flow and the real optical flow yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmenting the difference of optical flows. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
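The flow-difference idea reads directly as array operations. The following is a hedged reconstruction from the abstract, not the actual BipedDetect source; the thresholding step stands in for the segmentation the paper performs.

```python
import numpy as np

def pedestrian_mask(real_flow, synthetic_flow, threshold=1.0):
    """Flag pixels whose measured flow deviates from the static-scene prediction.

    real_flow, synthetic_flow: (H, W, 2) arrays of per-pixel (u, v) flow.
    Returns a boolean (H, W) mask of candidate pedestrian pixels.
    """
    residual = real_flow - synthetic_flow          # flow induced by pedestrians
    magnitude = np.linalg.norm(residual, axis=-1)  # per-pixel residual speed
    return magnitude > threshold                   # crude segmentation step
```

Pixels whose motion is fully explained by the ego-motion model cancel out; only independently moving regions survive the threshold.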

  16. Minimal feedback to a rhythm generator improves the robustness to slope variations of a compass biped.

    PubMed

    Spitz, Jonathan; Evstrachin, Alexandrina; Zacksenhouse, Miriam

    2015-08-20

    In recent years there has been a growing interest in the field of dynamic walking and bio-inspired robots. However, while walking and running on a flat surface have been studied extensively, walking dynamically over terrains with varying slope remains a challenge. Previously we developed an open-loop controller based on a central pattern generator (CPG). The controller applied predefined torque patterns to a compass-gait biped (CB), and achieved stable gaits over a limited range of slopes. In this work, this range is greatly extended by applying once-per-cycle feedback to the CPG controller. The terrain's slope is measured and used to modify both the CPG frequency and the torque amplitude once per step. A multi-objective optimization algorithm was used to tune the controller parameters for a simulated CB model. The resulting controller successfully traverses terrains with slopes ranging from +7° to -8°, comparable to most slopes found in human-constructed environments. Gait stability was verified by computing the linearized Poincaré map both numerically and analytically.
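The once-per-step feedback can be sketched as a linear rescaling of the CPG's frequency and torque amplitude by the measured slope. The gains and signs below are illustrative assumptions, not the optimized parameters from the paper.

```python
import math

def cpg_step_params(slope_deg, omega0=2 * math.pi, a0=1.0,
                    k_omega=0.05, k_amp=0.08):
    """Update CPG frequency and torque amplitude once per step from the slope."""
    omega = omega0 * (1.0 - k_omega * slope_deg)  # e.g. slower steps uphill
    amp = a0 * (1.0 + k_amp * slope_deg)          # e.g. stronger push uphill
    return omega, amp

def cpg_torque(t, omega, amp):
    """Predefined sinusoidal torque pattern within the current step."""
    return amp * math.sin(omega * t)
```

Within a step the torque pattern stays open-loop, exactly as in the original controller; the slope only enters at step boundaries, which keeps the feedback cheap.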

  17. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion in the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
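The multiple-kernel idea can be sketched as a convex combination of one kernel per feature channel, which is itself a valid SVM kernel. The channel weights and RBF choice below are assumptions for illustration, not the learned MFMK-SVM kernels.

```python
import numpy as np

def combined_kernel(X1, X2, weights=(0.4, 0.3, 0.3), gamma=0.5):
    """Convex combination of one RBF kernel per feature channel.

    X1: (n, f, d) and X2: (m, f, d) arrays with f feature channels
    (e.g. intensity, gradient, C1 SMF), d dimensions each.
    """
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for c, w in enumerate(weights):
        # squared Euclidean distances within channel c, shape (n, m)
        d2 = ((X1[:, c, None, :] - X2[None, :, c, :]) ** 2).sum(axis=-1)
        K += w * np.exp(-gamma * d2)  # each channel contributes one kernel
    return K
```

A matrix built this way can be passed to any kernelized SVM solver (e.g. a precomputed-kernel mode), letting each feature channel keep its own similarity measure.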

  18. Interacting With Robots to Investigate the Bases of Social Interaction.

    PubMed

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions, and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and the anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots can indeed establish an interaction with a human partner, contingently reacting to their actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" for assessing the sensory and motor mechanisms underlying human-human interaction. We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool for investigating the psychological and neuroscientific bases of social interaction.

  19. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  20. A feasibility study on the design and walking operation of a biped locomotor via dynamic simulation

    NASA Astrophysics Data System (ADS)

    Wang, Mingfeng; Ceccarelli, Marco; Carbone, Giuseppe

    2016-06-01

    A feasibility study on the mechanical design and walking operation of a Cassino biped locomotor is presented in this paper. The biped locomotor consists of two identical 3 degrees-of-freedom tripod leg mechanisms with a parallel manipulator architecture. Planning of the biped walking gait is performed by coordinating the motions of the two leg mechanisms and waist. A three-dimensional model is elaborated in SolidWorks® environment in order to characterize a feasible mechanical design. Dynamic simulation is carried out in MSC.ADAMS® environment with the aims of characterizing and evaluating the dynamic walking performance of the proposed design. Simulation results show that the proposed biped locomotor with proper input motions of linear actuators performs practical and feasible walking on flat surfaces with limited actuation and reaction forces between its feet and the ground. A preliminary prototype of the biped locomotor is built for the purpose of evaluating the operation performance of the biped walking gait of the proposed locomotor.

  1. Measuring empathy for human and robot hand pain using electroencephalography.

    PubMed

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although this ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to human others. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

  2. Improving Grasp Skills Using Schema Structured Learning

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.

    2006-01-01

    In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.

  3. Application of ultrasonic sensor for measuring distances in robotics

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects, an alternative to technical vision. Humanoid robots, like robots of other types, are typically equipped first with sensory systems similar to the human senses. However, this approach is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in nature. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is also given.
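The record above turns on a single piece of timing math: the HC-SR04 reports range as the width of its echo pulse, which covers the sound's round trip to the obstacle and back. A minimal sketch of that conversion (not the paper's subroutine; the 343 m/s speed of sound is an assumed room-temperature value):

```python
def echo_to_distance_cm(echo_us: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance in cm.

    The echo pulse covers the round trip to the obstacle and back,
    so the one-way distance is half of speed * time.
    """
    round_trip_m = speed_of_sound_m_s * (echo_us * 1e-6)
    return round_trip_m / 2.0 * 100.0  # metres -> centimetres
```

For example, a 580 µs echo corresponds to roughly 9.9 cm. On a microcontroller such as the STM32 board mentioned, `echo_us` would come from a timer capture on the sensor's ECHO pin.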

  4. Robonaut: a robot designed to work with humans in space

    NASA Technical Reports Server (NTRS)

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space-walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.

  5. Robonaut: a robot designed to work with humans in space.

    PubMed

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space-walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.

  6. We're in This Together: Intentional Design of Social Relationships with AIED Systems

    ERIC Educational Resources Information Center

    Walker, Erin; Ogan, Amy

    2016-01-01

    Students' relationships with their peers, teachers, and communities influence the ways in which they approach learning activities and the degree to which they benefit from them. Learning technologies, ranging from humanoid robots to text-based prompts on a computer screen, have a similar social influence on students. We envision a future in which…

  7. Pedagogical and Technological Augmentation of Mobile Learning for Young Children Interactive Learning Environments

    ERIC Educational Resources Information Center

    Kim, Yanghee; Smith, Diantha

    2017-01-01

    The ubiquity and educational potential of mobile applications are well acknowledged. This paper proposes six theory-based, pedagogical strategies to guide interaction design of mobile apps for young children. Also, to augment the capabilities of mobile devices, we used a humanoid robot integrated with a smartphone and developed an English-learning…

  8. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  9. Curiosity driven reinforcement learning for motion planning on humanoids

    PubMed Central

    Frank, Mikhail; Leitner, Jürgen; Stollenga, Marijn; Förster, Alexander; Schmidhuber, Jürgen

    2014-01-01

    Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience, controlling the actual iCub hardware in real-time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment. PMID:24432001
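The "information gain maximization" mentioned above can be illustrated with a toy count-based model (a hypothetical sketch, not the paper's implementation): score each observed outcome by the KL divergence between the model's predictive distribution after and before the update. The gain shrinks as a region of the state-action space becomes familiar, which is what pulls a curious agent toward the unexplored:

```python
import math

def information_gain(counts: list[int], outcome: int, prior: float = 1.0) -> float:
    """KL divergence between the predictive (mean) distributions after and
    before observing `outcome`, for a Dirichlet-count transition model."""
    total = sum(counts) + prior * len(counts)
    before = [(c + prior) / total for c in counts]          # predictive before update
    post_counts = list(counts)
    post_counts[outcome] += 1                               # incorporate the observation
    after = [(c + prior) / (total + 1) for c in post_counts]
    return sum(a * math.log(a / b) for a, b in zip(after, before))
```

The first observation of an outcome yields a much larger gain than the hundredth, so an agent rewarded by this quantity keeps seeking poorly modeled regions.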

  10. Creating the brain and interacting with the brain: an integrated approach to understanding the brain.

    PubMed

    Morimoto, Jun; Kawato, Mitsuo

    2015-03-06

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop.

  11. Creating the brain and interacting with the brain: an integrated approach to understanding the brain

    PubMed Central

    Morimoto, Jun; Kawato, Mitsuo

    2015-01-01

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568

  12. Stability analysis via the concept of Lyapunov exponents: a case study in optimal controlled biped standing

    NASA Astrophysics Data System (ADS)

    Sun, Yuming; Wu, Christine Qiong

    2012-12-01

    Balancing control is important for biped standing. Despite large efforts, it is very difficult to design balancing control strategies that satisfy three requirements simultaneously: maintaining postural stability, improving energy efficiency and satisfying the constraints between the biped feet and the ground. In this article, a proportional-derivative (PD) controller is proposed for a standing biped, which is simplified as a two-link inverted pendulum with one additional rigid foot-link. The genetic algorithm (GA) is used to search for the control gains meeting all three requirements. The stability analysis of such a deterministic biped control system is carried out using the concept of Lyapunov exponents (LEs), based on which the system stability, where the disturbance comes from the initial states, and the structural stability, where the disturbance comes from the PD gains, are examined quantitatively in terms of stability regions. This article contributes to biped balancing control; more significantly, the method demonstrated in this biped case study provides a general framework for systematic stability analysis of certain deterministic nonlinear dynamical systems.
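The LE machinery used above can be demonstrated on a one-dimensional toy system (a generic numerical sketch, not the paper's biped model): the largest Lyapunov exponent of a map is the long-run average of log|f'(x)| along a trajectory, and for the chaotic logistic map x_{n+1} = 4x(1 - x) the known value is ln 2 ≈ 0.693:

```python
import math

def largest_lyapunov_logistic(x0: float = 0.2, n: int = 10_000, burn_in: int = 100) -> float:
    """Estimate the largest Lyapunov exponent of x -> 4x(1-x) as the
    time average of log|f'(x)| = log|4(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 * (1.0 - 2.0 * x)))
        x = 4.0 * x * (1.0 - x)
    return acc / n
```

A positive estimate signals exponential divergence of nearby trajectories; in the standing-biped setting it is, conversely, a negative largest LE of the closed-loop system that certifies stability of the PD-controlled posture.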

  13. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths: autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human-robot interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons: the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. 
Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.

  14. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  15. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  16. Novel Door-opening Method for Six-legged Robots Based on Only Force Sensing

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Jun; Gao, Feng; Pan, Yang

    2017-09-01

    Current door-opening methods are mainly developed on tracked, wheeled and biped robots by applying multi-DOF manipulators and vision systems. However, door-opening methods for six-legged robots are seldom studied, especially those using 0-DOF tools to operate and only force sensing to detect. A novel door-opening method for six-legged robots is developed and implemented on the six-parallel-legged robot. The kinematic model of the six-parallel-legged robot is established, and a model for measuring the positional relationship between the robot and the door is proposed. The measurement model is based entirely on force sensing. The real-time trajectory planning method and the control strategy are designed. The trajectory planning method allows the maximum angle between the sagittal axis of the robot body and the normal line of the door plane to be 45°. A 0-DOF tool mounted on the robot body is used for operation. Integrated with the body, the tool gains 6 DOFs and enough workspace to operate. The loose grasp achieved by the tool helps release the internal force in the tool. Experiments are carried out to validate the method. The results show that the method is effective and robust in opening doors wider than 1 m. This paper proposes a novel door-opening method for six-legged robots, which notably uses a 0-DOF tool and only force sensing to detect and open the door.

  17. Modeling of R/C Servo Motor and Application to Underactuated Mechanical Systems

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masato; Kitayoshi, Ryohei; Wada, Takashi; Maruta, Ichiro; Sugie, Toshiharu

    An R/C servo motor is a compact package consisting of a geared DC motor and a position servo controller. R/C servos are widely used in small-sized robotics and mechatronics by virtue of their compactness, ease of use and high torque-to-weight ratio. However, it is crucial to clarify their internal model (including the embedded position servo) in order to improve the control performance of mechatronic systems using R/C servo motors, such as biped robots or underactuated systems. In this paper, we propose a simple and realistic internal model of the R/C servo motor, including the embedded servo controller, and estimate its physical parameters using a continuous-time system identification method. We also provide a model of the reference-to-torque transfer function so that we can estimate the internal torque acting on the load.
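The kind of internal model the authors estimate can be sketched as a load inertia with viscous friction driven by an embedded proportional position loop (the parameters below are made up for illustration, not identified values from the paper):

```python
def simulate_servo_step(ref: float = 1.0, J: float = 0.01, b: float = 0.1,
                        K: float = 1.0, dt: float = 0.001, steps: int = 5000) -> float:
    """Simulate J*theta'' + b*theta' = K*(ref - theta): load inertia J with
    viscous friction b under an embedded proportional servo of gain K.
    Returns the angle after steps*dt seconds of explicit Euler integration."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        torque = K * (ref - theta)        # embedded position servo
        alpha = (torque - b * omega) / J  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return theta
```

With these values the closed loop has natural frequency 10 rad/s and damping ratio 0.5, so the 5 s simulation ends well after settling on the reference; identification would fit J, b and K to measured responses of the real servo.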

  18. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012573 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  19. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012571 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  20. Six-and-a-Half-Month-Old Children Positively Attribute Goals to Human Action and to Humanoid-Robot Motion

    ERIC Educational Resources Information Center

    Kamewari, K.; Kato, M.; Kanda, T.; Ishiguro, H.; Hiraki, K.

    2005-01-01

    Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation…

  1. Robonaut: A Robotic Astronaut Assistant

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Diftler, Myron A.

    2001-01-01

    NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree-of-freedom (DOF) articulated waist and two seven-DOF arms, giving it an impressive workspace for interacting with its environment. Its two five-fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.

  2. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system, with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in the software and hardware areas, in particular in combination with graph databases for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots.

  3. Reinforcement learning: Solving two case studies

    NASA Astrophysics Data System (ADS)

    Duarte, Ana Filipa; Silva, Pedro; dos Santos, Cristina Peixoto

    2012-09-01

    Reinforcement Learning algorithms offer interesting features for the control of autonomous systems, such as the ability to learn from direct interaction with the environment and the use of a simple reward signal, as opposed to the input-output pairs used in classic supervised learning. The reward signal indicates the success or failure of the actions executed by the agent in the environment. In this work, RL algorithms are applied to two case studies: the Crawler robot and the widely known inverted pendulum. We explore RL capabilities to autonomously learn a basic locomotion pattern in the Crawler, and approach the balancing problem of biped locomotion using the inverted pendulum.
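The reward-driven loop described above can be reduced to a minimal tabular Q-learning sketch on a toy chain world (an illustration, not the paper's Crawler or pendulum setup): the agent only ever sees a scalar reward at the goal, yet learns to head right from every state:

```python
import random

def train_chain_qlearning(n_states: int = 5, episodes: int = 500,
                          alpha: float = 0.5, gamma: float = 0.9,
                          epsilon: float = 0.2, seed: int = 0):
    """Tabular Q-learning on a chain 0..n_states-1; action 0 = left,
    action 1 = right; reward 1 only on reaching the rightmost state."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < epsilon \
                else max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy action in every non-goal state is "right", with values decaying by the discount factor as the state moves away from the goal.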

  4. A survey on dielectric elastomer actuators for soft robots.

    PubMed

    Gu, Guo-Ying; Zhu, Jian; Zhu, Li-Min; Zhu, Xiangyang

    2017-01-23

    Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: i) dielectric elastomers are a kind of soft, motion-generating material that resembles natural human muscle in terms of force, strain (displacement per unit length or area) and actuation pressure/density; ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing the key points of their working principle, key components and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further studies in terms of mechanism design, dynamics modeling and autonomous control.

  5. Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji

    2012-01-01

    This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…

  6. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures as captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, using the same base elements, four game routines (Touch the Balls 1 and 2, Simon Says, and Follow the Point) are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient.

  7. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    PubMed

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures presented as part of multi-modal communication are integrated with speech equally well for human and robot performances.

  8. Acquisition of Robotic Giant-swing Motion Using Reinforcement Learning and Its Consideration of Motion Forms

    NASA Astrophysics Data System (ADS)

    Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro

    This paper describes how a compact humanoid robot can acquire a giant-swing motion, without any robotic models, by using the Q-Learning method. It is widely held that Q-Learning is not appropriate for learning dynamic motions because the Markov property is not necessarily guaranteed during a dynamic task. We address this problem by embedding the angular velocity in the state definition and by averaging the Q-Learning updates to reduce dynamic effects, although some non-Markov effects remain in the learning results. The results show how the robot can acquire a giant-swing motion using the Q-Learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the results show how this method avoids the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
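
    The core idea in this abstract, augmenting the discrete state with an angular-velocity component so that a tabular Q-Learning update stays usable on a dynamic task, can be sketched as follows. The state bins, action set, and reward below are illustrative assumptions, not the authors' actual setup:

```python
import random

# Illustrative sketch (not the authors' setup): tabular Q-Learning whose
# state includes an angular-velocity bin alongside the angle bin, as the
# abstract suggests for restoring (approximate) Markov structure.

N_ANGLE_BINS, N_VEL_BINS = 8, 8
ACTIONS = [-1, 0, +1]            # hypothetical: swing backward / hold / forward
ALPHA, GAMMA = 0.1, 0.95

Q = {}                           # (angle_bin, vel_bin) -> per-action values

def q_values(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

def update(state, action_idx, reward, next_state):
    # One Q-Learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward + GAMMA * max(q_values(next_state))
    q_values(state)[action_idx] += ALPHA * (target - q_values(state)[action_idx])

# Toy rollout with an invented reward: swing amplitude away from the bottom bin.
random.seed(0)
state = (4, 4)
for _ in range(2000):
    a = random.randrange(len(ACTIONS))
    nxt = ((state[0] + ACTIONS[a]) % N_ANGLE_BINS, random.randrange(N_VEL_BINS))
    update(state, a, abs(nxt[0] - 4) / 4.0, nxt)
    state = nxt
```

    The averaging the authors mention could be layered on top by smoothing the backup target over repeated visits; the sketch keeps the plain one-step update.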

  9. Archaic man meets a marvellous automaton: posthumanism, social robots, archetypes.

    PubMed

    Jones, Raya

    2017-06-01

    Posthumanism is associated with critical explorations of how new technologies are rewriting our understanding of what it means to be human and how they might alter human existence itself. Intersections with analytical psychology vary depending on which technologies are held in focus. Social robotics promises to populate everyday settings with entities that have populated the imagination for millennia. A legend of A Marvellous Automaton appears as early as 350 B.C. in a book of Taoist teachings, and is joined by ancient and medieval legends of manmade humanoids coming to life, as well as the familiar robots of modern science fiction. However, while the robotics industry seems to be realizing an archetypal fantasy, the technology creates new social realities that generate distinctive issues of potential relevance for the theory and practice of analytical psychology. © 2017, The Society of Analytical Psychology.

  10. Motion Recognition and Modifying Motion Generation for Imitation Robot Based on Motion Knowledge Formation

    NASA Astrophysics Data System (ADS)

    Okuzawa, Yuki; Kato, Shohei; Kanoh, Masayoshi; Itoh, Hidenori

    A knowledge-based approach to imitation learning of motion generation for humanoid robots, and an imitative motion generation system based on motion knowledge learning and modification, are described. The system has three parts: recognition, learning, and modification. The first part recognizes an instructed motion by matching it against the motion knowledge database with a continuous hidden Markov model. When the motion is recognized as unfamiliar, the second part learns it using locally weighted regression and acquires knowledge of the motion. When the robot recognizes the instructed motion as familiar, or judges that its acquired knowledge is applicable to the motion generation, the third part imitates the instructed motion by modifying a learned motion. This paper reports performance results on the imitation of several radio gymnastics motions.
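
    As a rough illustration of the learning step the abstract names, the sketch below fits a joint-angle trajectory with locally weighted regression: each query point gets its own weighted linear fit, with Gaussian weights falling off in time. The kernel width and the sample trajectory are assumptions for demonstration:

```python
import math

# Illustrative sketch: locally weighted regression (LWR), the technique the
# abstract names for learning unfamiliar motions. Kernel width and sample
# joint trajectory are invented for demonstration.

def lwr_predict(x_query, xs, ys, bandwidth=0.05):
    """Gaussian-weighted linear least-squares fit around x_query."""
    ws = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)) or 1e-12
    return my + (num / den) * (x_query - mx)

# Joint-angle samples along a normalized time axis.
ts = [i / 10 for i in range(11)]
angles = [math.sin(2 * math.pi * t) for t in ts]
print(lwr_predict(0.25, ts, angles))  # local estimate near the sine peak
```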

  11. A developmental roadmap for learning by imitation in robots.

    PubMed

    Lopes, Manuel; Santos-Victor, José

    2007-04-01

    In this paper, we present a strategy whereby a robot acquires the capability to learn by imitation, following a developmental pathway consisting of three levels: 1) sensory-motor coordination; 2) world interaction; and 3) imitation. With these stages, the system is able to learn tasks by imitating human demonstrators. We describe results of the different developmental stages, involving perceptual and motor skills, implemented in our humanoid robot, Baltazar. At each stage, the system's attention is drawn toward different entities: its own body and, later on, objects and people. Our main contributions are the general architecture and the implementation of all the necessary modules until imitation capabilities are eventually acquired by the robot. Also, several other contributions are made at each level: learning of sensory-motor maps for redundant robots, a novel method for learning how to grasp objects, and a framework for learning task description from observation for program-level imitation. Finally, vision is used extensively as the sole sensing modality (sometimes in a simplified setting) avoiding the need for special data-acquisition hardware.

  12. Assisted Perception, Planning and Control for Remote Mobility and Dexterous Manipulation

    DTIC Science & Technology

    2017-04-01

    on unmanned aerial vehicles (UAVs). The underlying algorithm is based on an Extended Kalman Filter (EKF) that simultaneously estimates robot state... and sensor biases. The filter developed provided a probabilistic fusion of sensor data from many modalities to produce a single consistent position... estimation for a walking humanoid. Given a prior map using a Gaussian particle filter, the LIDAR based system is able to provide a drift-free
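
    The snippet describes an EKF that jointly estimates robot state and sensor biases. A minimal sketch of that idea, reduced to a 1-D linear Kalman filter whose state vector is augmented with a measurement bias, is given below; the dynamics, noise levels, and observability caveat are illustrative assumptions, not the report's filter:

```python
import random

# Illustrative sketch (not the report's filter): a 1-D linear Kalman filter
# whose state is augmented with a sensor bias, showing joint state-and-bias
# estimation. With measurement z = x + b, only the sum x + b is observable
# in this toy model; a real filter disambiguates via multiple modalities.

def kf_step(x, b, P, z, q=0.001, r=0.25):
    # Predict: position and bias modeled as slow random walks.
    P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + 0.1 * q]]
    # Update with measurement model H = [1, 1].
    s = P[0][0] + P[0][1] + P[1][0] + P[1][1] + r
    k0 = (P[0][0] + P[0][1]) / s
    k1 = (P[1][0] + P[1][1]) / s
    y = z - (x + b)                      # innovation
    x, b = x + k0 * y, b + k1 * y
    P = [[(1 - k0) * P[0][0] - k0 * P[1][0], (1 - k0) * P[0][1] - k0 * P[1][1]],
         [-k1 * P[0][0] + (1 - k1) * P[1][0], -k1 * P[0][1] + (1 - k1) * P[1][1]]]
    return x, b, P

random.seed(1)
true_x, true_b = 2.0, 0.5
x, b, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for _ in range(500):
    x, b, P = kf_step(x, b, P, true_x + true_b + random.gauss(0, 0.5))
print(x + b)  # converges near true_x + true_b = 2.5
```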

  13. Foundations for a Theory of Mind for a Humanoid Robot

    DTIC Science & Technology

    2001-05-01

    visual processing software and our understanding of how people interact with Lazlo. Thanks also to Jessica Banks, Charlie Kemp, and Juan Velasquez for... capacities of infants (e.g., Carey, 1999; Gelman, 1990). Furthermore, research on pervasive developmental disorders such as autism has focused on the se... Keil, 1995; Carey, 1995; Gelman et al., 1983). While the discrimination of animate from inanimate certainly relies upon many distinct properties

  14. Locomotion training of legged robots using hybrid machine learning techniques

    NASA Technical Reports Server (NTRS)

    Simon, William E.; Doerschuk, Peggy I.; Zhang, Wen-Ran; Li, Andrew L.

    1995-01-01

    In this study artificial neural networks and fuzzy logic are used to control the jumping behavior of a three-link uniped robot. The biped locomotion control problem is an increment of the uniped locomotion control. Study of legged locomotion dynamics indicates that a hierarchical controller is required to control the behavior of a legged robot. A structured control strategy is suggested which includes navigator, motion planner, biped coordinator and uniped controllers. A three-link uniped robot simulation is developed to be used as the plant. Neurocontrollers were trained both online and offline. In the case of online training, a reinforcement learning technique was used to train the neurocontroller to make the robot jump to a specified height. After several hundred iterations of training, the plant output achieved an accuracy of 7.4%. However, when jump distance and body angular momentum were also included in the control objectives, training time became impractically long. In the case of offline training, a three-layered backpropagation (BP) network was first used with three inputs, three outputs and 15 to 40 hidden nodes. Pre-generated data were presented to the network with a learning rate as low as 0.003 in order to reach convergence. The low learning rate required for convergence resulted in a very slow training process which took weeks to learn 460 examples. After training, performance of the neurocontroller was rather poor. Consequently, the BP network was replaced by a Cerebellar Model Articulation Controller (CMAC) network. Subsequent experiments described in this document show that the CMAC network is more suitable to the solution of uniped locomotion control problems in terms of both learning efficiency and performance. A new approach is also introduced: a self-organizing multiagent cerebellar model for fuzzy-neural control of uniped locomotion, suggested to improve training efficiency. This is currently being evaluated for a possible patent by NASA, Johnson Space Center. An alternative modular approach is also developed which uses separate controllers for each stage of the running stride. A self-organizing fuzzy-neural controller controls the height, distance and angular momentum of the stride. A CMAC-based controller controls the movement of the leg from the time the foot leaves the ground to the time of landing. Because the leg joints are controlled at each time step during flight, movement is smooth and obstacles can be avoided. Initial results indicate that this approach can yield fast, accurate results.
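
    A minimal sketch of the CMAC approximator the report favors over backpropagation: several offset tilings hash an input into coarse cells, the prediction is the sum of the active cells' weights, and training spreads the error evenly across them. Tiling count, tile width, and learning rate here are illustrative assumptions:

```python
import math

# Illustrative sketch: a minimal 1-D CMAC (Cerebellar Model Articulation
# Controller) function approximator of the kind the report found more
# sample-efficient than backpropagation for this problem.

class CMAC:
    def __init__(self, n_tilings=8, tile_width=0.5, alpha=0.3):
        self.n_tilings, self.tile_width, self.alpha = n_tilings, tile_width, alpha
        self.weights = {}

    def _tiles(self, x):
        # Each tiling is shifted by a fraction of the tile width.
        for t in range(self.n_tilings):
            offset = t * self.tile_width / self.n_tilings
            yield (t, int((x + offset) // self.tile_width))

    def predict(self, x):
        # Output is the sum of the weights of the active cells.
        return sum(self.weights.get(tile, 0.0) for tile in self._tiles(x))

    def train(self, x, target):
        # LMS rule: spread the prediction error evenly over the active cells.
        err = target - self.predict(x)
        for tile in self._tiles(x):
            self.weights[tile] = (self.weights.get(tile, 0.0)
                                  + self.alpha * err / self.n_tilings)

net = CMAC()
for _ in range(200):                 # a few sweeps over a smooth target
    for i in range(21):
        x = i / 10.0                 # x in [0, 2]
        net.train(x, math.sin(x))
```

    Because only a handful of local weights change per update, each training example converges in a few presentations, which is the sample-efficiency advantage the report observed over a monolithic BP network.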

  15. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences

    PubMed Central

    2016-01-01

    Emerging studies indicate that several species such as corvids, apes and children solve ‘The Crow and the Pitcher’ task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause–effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended ‘learning–prediction–abstraction’ loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. PMID:27466440

  16. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences.

    PubMed

    Bhat, Ajaz Ahmad; Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2016-07-01

    Emerging studies indicate that several species such as corvids, apes and children solve 'The Crow and the Pitcher' task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause-effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended 'learning-prediction-abstraction' loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. © 2016 The Author(s).

  17. Predictive Interfaces for Long-Distance Tele-Operations

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas

    2005-01-01

    We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. Firstly, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Secondly, we develop compensation for the time-delay that exists when sending telemetry data from a remote operation point to robots located at low earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, which is an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time-delay is introduced into the system, the rate of tele-operation slows down to mimic a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time-delays. 
To this end, we are developing an interface which will allow for us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
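
    One plausible reading of "trained hidden Markov models ... to predict a tele-operator's intent" is likelihood scoring of the observed command stream under one discrete HMM per intent, via the forward algorithm. The two toy models and the symbol alphabet below are invented for illustration:

```python
import math

# Illustrative sketch (one plausible reading of the paper's HMM intent
# prediction): score the observed command symbols under a discrete HMM per
# intent and pick the most likely. Models and the symbol alphabet
# (0=approach, 1=close-grip, 2=oscillate) are invented for illustration.

def forward_loglik(obs, pi, A, B):
    """log P(obs | HMM) by the forward algorithm (unscaled; fine for short obs)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

INTENTS = {
    "reach-and-grasp": ([1.0, 0.0],
                        [[0.7, 0.3], [0.0, 1.0]],
                        [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]),
    "wave":            ([1.0, 0.0],
                        [[0.9, 0.1], [0.1, 0.9]],
                        [[0.1, 0.1, 0.8], [0.1, 0.1, 0.8]]),
}

obs = [0, 0, 1, 1]                       # approach, approach, grip, grip
best = max(INTENTS, key=lambda k: forward_loglik(obs, *INTENTS[k]))
print(best)                              # -> reach-and-grasp
```

    For longer observation streams the forward recursion should be rescaled at each step to avoid underflow; the short sequence here does not need it.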

  18. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many robot specifications have been improved and enriched as technology advances. Among them are humanoid robots whose facial expressions come closer to natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile strength tests on silicone rubber as an artificial skin. Facial expressions were calculated by determining the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's facial expression matches the human facial expression when the muscle structure of the face follows human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted to follow human expressions. The tensile strength test is conducted to check the proportional force of the artificial skin that can be applied in future robot facial expressions. Combining the calculated and experimental results can yield a reliable and sustainable robot facial expression using silicone rubber as artificial skin.

  19. The Affordance Template ROS Package for Robot Task Programming

    NASA Technical Reports Server (NTRS)

    Hart, Stephen; Dinh, Paul; Hambuchen, Kimberly

    2015-01-01

    This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic XML description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.

  20. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  1. Robotic Assistance in Medication Management: Development and Evaluation of a Prototype.

    PubMed

    Schweitzer, Marco; Hoerbst, Alexander

    2016-01-01

    An increasing number of elderly people and the prevalence of multimorbid conditions often lead to age-related problems for patients in handling their common polypharmaceutical, domestic everyday medication. Ambient Assisted Living therefore provides means to support an elderly person's everyday life. In the present paper we investigated the viability of using a commercial, mass-produced humanoid robot system to support the domestic medication of an elderly person. A prototypical software application based on the NAO robot platform was implemented to remind the patient of drug intakes, check for drug-drug interactions, document compliance, and assist through the complete process of individual medication. A technical and functional evaluation of the system in a laboratory setting revealed versatile and viable results, though further investigations are needed to examine practical use in an applied setting.

  2. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture.

    PubMed

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2013-01-01

    The identification of learning mechanisms for locomotion has been the subject of much research for some time but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability which emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-square-temporal-difference based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward, it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value.
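
    The general actor-critic scheme the paper builds on (a critic maintaining a value estimate, and an actor whose policy parameters are nudged by the critic's TD error) can be sketched in tabular form. The chain-walking task below is an illustrative stand-in; it omits the paper's CPG, LSTD, and natural-gradient machinery:

```python
import math
import random

# Illustrative sketch: a tabular actor-critic driven by the TD error, the
# general scheme underlying the paper's cpg-actor-critic. The chain task,
# softmax actor, and plain TD(0) critic are invented simplifications.

random.seed(0)
N, GOAL = 6, 5
V = [0.0] * N                            # critic: state values
pref = [[0.0, 0.0] for _ in range(N)]    # actor: preferences for left/right
ALPHA_V, ALPHA_P, GAMMA = 0.2, 0.2, 0.95

def policy(s):
    e = [math.exp(p) for p in pref[s]]
    z = sum(e)
    return [v / z for v in e]

for _ in range(500):                     # episodes starting at the left end
    s = 0
    for _ in range(50):
        probs = policy(s)
        a = 0 if random.random() < probs[0] else 1
        s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        r, done = (1.0, True) if s2 == GOAL else (0.0, False)
        td = r + (0.0 if done else GAMMA * V[s2]) - V[s]   # TD error
        V[s] += ALPHA_V * td                               # critic update
        for i in range(2):                                 # actor: softmax policy gradient
            pref[s][i] += ALPHA_P * td * ((1.0 if i == a else 0.0) - probs[i])
        s = s2
        if done:
            break
```

    After training, the actor strongly prefers stepping right from every interior state; in the paper the same TD signal instead tunes CPG parameters, with stability acting as the adaptive value function.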

  3. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture

    PubMed Central

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2013-01-01

    The identification of learning mechanisms for locomotion has been the subject of much research for some time but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability which emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-square-temporal-difference based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward, it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value. PMID:23675345

  4. Robotics and artificial intelligence: Jewish ethical perspectives.

    PubMed

    Rappaport, Z H

    2006-01-01

    In 16th Century Prague, Rabbi Loew created a Golem, a humanoid made of clay, to protect his community. When the Golem became too dangerous to his surroundings, he was dismantled. This Jewish theme illustrates some of the guiding principles in its approach to the moral dilemmas inherent in future technologies, such as artificial intelligence and robotics. Man is viewed as having received the power to improve upon creation and develop technologies to achieve them, with the proviso that appropriate safeguards are taken. Ethically, not-harming is viewed as taking precedence over promoting good. Jewish ethical thinking approaches these novel technological possibilities with a cautious optimism that mankind will derive their benefits without coming to harm.

  5. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    PubMed Central

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  6. Space Shuttle Discovery is Prepared for Launch

    NASA Image and Video Library

    2011-02-23

    The space shuttle Discovery is seen shortly after the Rotating Service Structure was rolled back at launch pad 39A, at the Kennedy Space Center in Cape Canaveral, Florida, on Wednesday, Feb. 23, 2011. Discovery, on its 39th and final flight, will carry the Italian-built Permanent Multipurpose Module (PMM), Express Logistics Carrier 4 (ELC4), and Robonaut 2, the first humanoid robot in space, to the International Space Station. Photo Credit: (NASA/Bill Ingalls)

  7. Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift

    DTIC Science & Technology

    2004-05-01

    system. We propose a real-time algorithm to infer depth and build 3-dimensional coarse maps for objects through the analysis of cues provided by an... system is well defined at the boundary of these regions (although the derivatives are not). A time domain analysis is presented for a piece-linear... Analysis of Multivariable Systems... D.3.1 Networks of Multiple Neural Oscillators... D.3.2 Networks of

  8. Long-term dynamics of freshwater red tide in shallow lake in central Japan.

    PubMed

    Hirabayashi, Kimio; Yoshizawa, Kazuya; Yoshida, Norihiko; Ariizumi, Kazunori; Kazama, Futaba

    2007-01-01

    The aim of this study is to clarify the long-term dynamics of the red tide occurring in Lake Kawaguchi. The measurement of environmental factors and water sampling were carried out monthly at a fixed station in Lake Kawaguchi's center basin from April 1993 to March 2004. On June 26, 1995, the horizontal distribution of Peridinium bipes was investigated using a plastic pipe, obtaining 0∼1-m layers of water column samples at 68 locations across the entire lake. P. bipes showed an explosive growth and formed a freshwater red tide in the early summer of 1995, when the nutrient level was higher than those in the other years, particularly the phosphate concentration in the surface layer. The dissolved total phosphorus (DTP) concentration was sufficient for P. bipes growth in that year. In the study of its horizontal distribution, P. bipes was found at all the locations. The numbers of cells per milliliter ranged from 67 to 5360, averaging 1094±987 cells/ml, with particularly high densities along the northern shore. Since then, P. bipes has annually averaged about 25 cells/ml in Lake Kawaguchi. We observed that the red tide caused by P. bipes correlates with a high DTP concentration in Lake Kawaguchi.

  9. Control strategies for robots in contact

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung

    In the field of robotics, there is a growing need to provide robots with the ability to interact with complex and unstructured environments. Operations in such environments pose significant challenges in terms of sensing, planning, and control. In particular, it is critical to design control algorithms that account for the dynamics of the robot and environment at multiple contacts. The work in this thesis focuses on the development of a control framework that addresses these issues. The approaches are based on the operational space control framework and estimation methods. By accounting for the dynamics of the robot and environment, modular and systematic methods are developed for robots interacting with the environment at multiple locations. The proposed force control approach demonstrates high performance in the presence of uncertainties. Building on this basic capability, new control algorithms have been developed for haptic teleoperation, multi-contact interaction with the environment, and whole body motion of non-fixed based robots. These control strategies have been experimentally validated through simulations and implementations on physical robots. The results demonstrate the effectiveness of the new control structure and its robustness to uncertainties. The contact control strategies presented in this thesis are expected to contribute to the needs in advanced controller design for humanoid and other complex robots interacting with their environments.

  10. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive, cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy in which the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  11. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke.

    PubMed

    Johnson, Michelle J

    2006-12-18

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy as possible solutions to exercise compliance in under-supervised environments such as the home.

  12. The AGINAO Self-Programming Engine

    NASA Astrophysics Data System (ADS)

    Skaba, Wojciech

    2013-01-01

    AGINAO is a project to create a human-level artificial general intelligence (HL AGI) system embodied in the Aldebaran Robotics NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is an embedded, multi-threaded control program that is self-crafted rather than hand-crafted and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that carries out self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supply the training samples for the machine learning methods, while the commands sent to the actuators enable testing hypotheses and obtaining feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world's dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

  13. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    PubMed Central

    Johnson, Michelle J

    2006-01-01

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment, but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. The effectiveness of these environments may be improved by incorporating into their design and control strategies elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy as possible solutions to exercise compliance in under-supervised environments such as the home. PMID:17176474

  14. Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

    NASA Astrophysics Data System (ADS)

    Barakova, E. I.; De Haas, M.; Kuijpers, W.; Irigoyen, N.; Betancourt, A.

    2018-01-01

    In search of better technological solutions for education, we adapted a principle from economic game theory, namely that giving help promotes collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children playing a game of checkers, combined with a social strategy, against a robot or against a computer. We found that by combining the social and game strategies the children (average age 8.3 years) had more empathy and social engagement with the robot, since the children did not necessarily want to win against it. This finding is promising for using social strategies to create long-term relations between robots and children and to make educational tasks more engaging. An additional outcome of the study was a significant difference in the children's perception of the difficulty of the game: the game with the robot was seen as more challenging, and the robot as a smarter opponent. This finding might be due to the higher perceived or expected intelligence of the robot, or to the higher complexity of seeing patterns in a three-dimensional world.

  15. Progress in EEG-Based Brain Robot Interaction Systems

    PubMed Central

    Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients with daily life. The key issue in a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, these applications may be more challenging to develop, since controlling a robot system via brainwaves must account for real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, and robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges of future BRI techniques. PMID:28484488
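
    The decoding pipeline summarized above (preprocessing, feature extraction, feature classification) can be sketched in miniature: a naive DFT band-power feature standing in for feature extraction, and a threshold rule standing in for the classifier. The mu-rhythm-suppression criterion, sampling rate, and threshold below are illustrative assumptions, not a method from the article:

    ```python
    import math

    def band_power(signal, fs, f_lo, f_hi):
        """Naive DFT band power: sum of squared bin magnitudes in [f_lo, f_hi] Hz."""
        n = len(signal)
        power = 0.0
        for k in range(n // 2 + 1):
            freq = k * fs / n
            if f_lo <= freq <= f_hi:
                re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                power += (re * re + im * im) / n
        return power

    def classify_mu_suppression(epoch, fs, threshold):
        """Toy classifier: label an epoch as motor imagery when mu-band (8-12 Hz)
        power drops below a threshold (event-related desynchronization)."""
        return "imagery" if band_power(epoch, fs, 8.0, 12.0) < threshold else "rest"

    fs = 128
    # Synthetic epochs: a strong 10 Hz mu rhythm at rest, suppressed during imagery.
    rest = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
    imagery = [0.1 * math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
    ```

    A real system would precede this with artifact rejection and spatial filtering, and replace the threshold with a trained classifier, but the feature-then-decision structure is the same.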

  16. Simulated Lidar Images of Human Pose using a 3DS Max Virtual Laboratory

    DTIC Science & Technology

    2015-12-01

    developed in Autodesk 3DS Max, with an animated, biofidelic 3D human mesh biped character (avatar) as the subject. The biped animation modifies the digital human model through a time sequence of motion capture data representing an... AFB. Mr. Isiah Davenport from Infoscitex Corp developed the method for creating the biofidelic avatars from laboratory data and 3DS Max code for...

  17. Primate Anatomy, Kinematics, and Principles for Humanoid Design

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Ambrose, Catherine G.

    2004-01-01

    The primate order of animals is investigated for clues in the design of humanoid robots. The pursuit is directed by a theory that kinematics, musculature, perception, and cognition can be optimized for specific tasks by varying the proportions of limbs, and in particular the points of branching in kinematic trees such as the primate skeleton. Called the Bifurcated Chain Hypothesis, the theory is that the branching proportions found in humans may be superior to those of other animals and primates for the tasks of dexterous manipulation and other human specialties. The primate taxa are defined, contemporary primate evolution hypotheses are critiqued, and variations within the order are noted. The kinematic branching points of the torso, limbs, and fingers are studied for differences in proportions across the order and associated with family and genus capabilities and behaviors. The human configuration of a long waist, long neck, and short arms is graded using a kinematic workspace analysis and a set of design axioms for mobile manipulation robots. It scores well. The re-emergence of the human waist, seen in early prosimians and monkeys for arboreal balance but lost in the terrestrial Pongidae, is postulated as benefiting human dexterity. The human combination of an articulated waist and neck is shown to enable the use of smaller arms, achieving greater regions of workspace dexterity than the larger limbs of gorillas and other Hominoidea.

  18. Elastic MCF Rubber with Photovoltaics and Sensing for Use as Artificial or Hybrid Skin (H-Skin): 1st Report on Dry-Type Solar Cell Rubber with Piezoelectricity for Compressive Sensing.

    PubMed

    Shimada, Kunio

    2018-06-05

    Ordinary solar cells are very difficult to bend, compress, or stretch. However, if they possessed elastic, flexible, and extensible properties, in addition to piezoelectricity and resistivity, they could be put to effective use as artificial skin installed over human-like robots or humanoids. Such a skin could further serve as a husk that generates electric power from solar energy and perceives force or temperature changes. Therefore, we propose a new type of artificial skin, called hybrid skin (H-Skin), for a humanoid robot having hybrid functions. In this study, a novel elastic solar cell is developed from natural rubber that is electrolytically polymerized, with magnetic clusters of metal particles configured in the rubber by applying a magnetic field. The material thus produced, named magnetic compound fluid rubber (MCF rubber), is elastic, flexible, and extensible. The present report deals with a dry-type MCF rubber solar cell that uses photosensitized dye molecules. First, the photovoltaic mechanism in the material is investigated. Next, the changes in the photovoltaic properties of its molecules under irradiation by visible light are measured under compression. The effect of the compression on its piezoelectric properties is also investigated.

  19. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid.

    PubMed

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness, measured using Inertial Measurement Units (IMUs)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. A principal component analysis (PCA) suggests that, among the motion measures examined in this paper, head smoothness and goodbye gesture frequency are the most reliable measures of the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions.

  20. Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?

    PubMed Central

    Martini, Molly C.; Gonzalez, Christian A.; Wiese, Eva

    2016-01-01

    Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution, and whether different aspects of having a mind (e.g., feeling pain, being able to move) need different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood with which mind is attributed to non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described by a two-linear model, with no change in mind attribution on the spectrum from mechanistic to humanoid robot but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents, given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached; that is, agents need to be classified as having a mind first before the addition of more human-like features significantly increases the degree to which mind is attributed to that agent. PMID:26745500

  1. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid

    PubMed Central

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness, measured using Inertial Measurement Units (IMUs)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. A principal component analysis (PCA) suggests that, among the motion measures examined in this paper, head smoothness and goodbye gesture frequency are the most reliable measures of the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions. PMID:24688466

  2. Robot Comedy Lab: experimenting with the social dynamics of live performance

    PubMed Central

    Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias

    2015-01-01

    The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them, negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they are having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585

  3. Examples of design and achievement of vision systems for mobile robotics applications

    NASA Astrophysics Data System (ADS)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and achieve a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria behind our choice, we propose a chain of image-processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
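
    The kind of operator chain described above can be illustrated with a toy first stage: a Sobel gradient-magnitude computation followed by a fixed threshold. This is a generic edge-detection sketch, not the authors' actual operator chain or its CPLD implementation:

    ```python
    def sobel_edges(img, thresh):
        """Minimal edge segmentation: Sobel gradient magnitude plus a fixed threshold.
        img is a 2D list of grey levels; returns a binary edge map (borders left zero)."""
        h, w = len(img), len(img[0])
        gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
        gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
        edges = [[0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                # Compare squared magnitude to avoid a square root per pixel.
                if gx * gx + gy * gy >= thresh * thresh:
                    edges[y][x] = 1
        return edges

    # A bright square on a dark background: edges appear along the intensity step.
    img = [[255 if 2 <= x <= 5 and 2 <= y <= 5 else 0 for x in range(8)] for y in range(8)]
    edge_map = sobel_edges(img, thresh=255)
    ```

    The per-pixel, fixed-neighborhood structure is what makes such a chain amenable to the pixel data-flow model and to hardware pipelining.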

  4. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track the relative motion of the robot with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up whereby the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results prove that the presented hybrid position tracking system can track the position and orientation with greater accuracy than the webcam or optical sensors alone. The results also confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
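
    The idea of fusing high-rate relative optical-sensor data with low-rate absolute webcam fixes can be sketched as a simple complementary blend: dead-reckon with the optical displacement every step, and pull the estimate toward the camera fix whenever a frame arrives. The pose representation, gain, and drift values below are illustrative assumptions, not the paper's actual fusion scheme:

    ```python
    def fuse(prev_est, optical_delta, camera_fix=None, gain=0.3):
        """One fusion step for a planar pose (x, y, theta): propagate the previous
        estimate with the relative optical displacement, then, when a webcam fix is
        available, blend toward the absolute fix with a fixed gain."""
        est = [prev_est[i] + optical_delta[i] for i in range(3)]
        if camera_fix is not None:
            est = [(1 - gain) * est[i] + gain * camera_fix[i] for i in range(3)]
        return est

    # Stationary robot: the optical sensors report a spurious +0.01 m drift in x each
    # step, while the camera reports the true pose [0, 0, 0] every 5th step.
    pose = [0.0, 0.0, 0.0]
    for step in range(1, 11):
        fix = [0.0, 0.0, 0.0] if step % 5 == 0 else None
        pose = fuse(pose, [0.01, 0.0, 0.0], fix)
    ```

    The periodic absolute correction bounds the accumulated odometric drift, which is the qualitative benefit the paper's hybrid scheme targets (its actual scheme weights the sensors by their measured accuracies rather than by a fixed gain).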

  5. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
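
    Bayes-style fusion of an audio and a visual estimate of the same quantity can be illustrated with the standard precision-weighted combination of two Gaussian measurements; the bearing values and variances below are invented for illustration and are not from the paper:

    ```python
    def fuse_gaussian(mu_a, var_a, mu_b, var_b):
        """Bayesian fusion of two independent Gaussian estimates of one quantity:
        the posterior is Gaussian, with precision (1/variance) equal to the sum of
        the two precisions and mean equal to the precision-weighted average."""
        var = 1.0 / (1.0 / var_a + 1.0 / var_b)
        mu = var * (mu_a / var_a + mu_b / var_b)
        return mu, var

    # Hypothetical bearings to a speaker: audio says 30 deg with variance 25
    # (broad), vision says 20 deg with variance 4 (sharp).
    mu, var = fuse_gaussian(30.0, 25.0, 20.0, 4.0)
    ```

    The fused estimate lands nearer the sharper (visual) cue yet is tighter than either cue alone, which is why an audio-visual system can outperform both unimodal systems.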

  6. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications introduce errors that add to other inherent electro-mechanical inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point, but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the robot's responses to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
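
    The LIPM that the DLIPM refines has a simple closed form: the COM is accelerated away from the ZMP in proportion to their horizontal offset, x'' = (g / z_c)(x - p), where p is the ZMP position and z_c the constant COM height. A minimal Euler-integration sketch (the parameter values are illustrative, not those of the paper's robot):

    ```python
    def lipm_step(x, xdot, zmp, z_c=0.8, g=9.81, dt=0.005):
        """One explicit-Euler step of the Linear Inverted Pendulum Model:
        xddot = (g / z_c) * (x - zmp). Returns the new (x, xdot)."""
        xddot = (g / z_c) * (x - zmp)
        return x + xdot * dt, xdot + xddot * dt

    # With the ZMP directly under the COM the state is an equilibrium; a step-shaped
    # ZMP offset (as in the paper's push-recovery-like trials) drives the COM away.
    x, xdot = 0.0, 0.0
    for _ in range(200):          # 1 s of simulated time
        x, xdot = lipm_step(x, xdot, zmp=-0.05)
    ```

    This exponential divergence for a fixed ZMP offset is exactly what a balance controller must counter by moving the ZMP (or stepping), and the error between this ideal model and the measured force–torque response is what the DLIPM characterizes.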

  7. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed

    Erikson, Henrik; Salzmann-Erikson, Martin

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance.

  8. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed Central

    Erikson, Henrik; Salzmann-Erikson, Martin

    2016-01-01

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance. PMID:27455058

  9. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant form of dexterous robotic task execution in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  10. [Supporting an ASD child with digital tools].

    PubMed

    Vallart, Etienne; Gicquel, Ludovic

    Autism spectrum disorders lead to a long-term and severe impairment of communication and social interactions. The expansion of information and communication technologies, through digital applications usable on different devices, can support these functions necessary for the development of children with ASD. Applications, serious games, and even humanoid robots help to boost children's interest in learning. They must, however, form part of a broader range of therapies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  11. KSC-2010-4379

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  12. KSC-2010-4378

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  13. Robotics in biomedical chromatography and electrophoresis.

    PubMed

    Fouda, H G

    1989-08-11

    The ideal laboratory robot can be viewed as "an indefatigable assistant capable of working continuously for 24 h a day with constant efficiency". The development of a system approaching that promise requires considerable skill and time commitment, a thorough understanding of the capabilities and limitations of the robot and its specialized modules and an intimate knowledge of the functions to be automated. The robot need not emulate every manual step. Effective substitutes for difficult steps must be devised. The future of laboratory robots depends not only on technological advances in other fields, but also on the skill and creativity of chromatographers and other scientists. The robot has been applied to automate numerous biomedical chromatography and electrophoresis methods. The quality of its data can approach, and in some cases exceed, that of manual methods. Maintaining high data quality during continuous operation requires frequent maintenance and validation. Well designed robotic systems can yield substantial increase in the laboratory productivity without a corresponding increase in manpower. They can free skilled personnel from mundane tasks and can enhance the safety of the laboratory environment. The integration of robotics, chromatography systems and laboratory information management systems permits full automation and affords opportunities for unattended method development and for future incorporation of artificial intelligence techniques and the evolution of expert systems. Finally, humanoid attributes aside, robotic utilization in the laboratory should not be an end in itself. The robot is a useful tool that should be utilized only when it is prudent and cost-effective to do so.

  14. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
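
    The core of such a sensory-substitution interface, mapping a robot arm's spatial configuration to actuator vibration frequency, can be as simple as a clamped linear map per joint. The joint travel and frequency ranges below are hypothetical, not the device's calibrated values:

    ```python
    def angle_to_frequency(angle, lo=0.0, hi=150.0, f_min=50.0, f_max=250.0):
        """Map a joint angle (degrees, travel [lo, hi]) linearly onto a vibrotactile
        actuator frequency (Hz, range [f_min, f_max]), clamping out-of-range input."""
        angle = max(lo, min(hi, angle))
        return f_min + (angle - lo) / (hi - lo) * (f_max - f_min)
    ```

    One actuator per joint driven this way lets the operator feel the arm's configuration without looking at it, which is the proprioceptive feedback the camera-and-joystick paradigm lacks.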

  15. Humanoids Designed to do Work

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron

    2001-01-01

    NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time, and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.

  16. Numerical Estimation of Balanced and Falling States for Constrained Legged Systems

    NASA Astrophysics Data System (ADS)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.

    2017-08-01

    Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. 
The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs. elbow-down). For both 1- and 2-DOF systems, the results are validated in simulation environments. Finally, the manifold for a biped walking robot is constructed and illustrated against its single-support walking trajectories. The manifold identified by the proposed framework for any given legged system can be evaluated beforehand as a system property and serves as a map for either a specified state or a specific controller's performance.
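    The state-space partition idea can be illustrated on a toy 1-DOF inverted pendulum with bounded ankle torque, where the torque bound stands in for the foot-contact (center-of-pressure) constraint. This sketch substitutes a saturated PD recovery policy for the paper's constrained-optimization extremal effort, and all parameters are invented:

    ```python
    import numpy as np

    # Toy 1-DOF inverted pendulum: m*l^2 * thdd = m*g*l*sin(th) + tau,
    # with ankle torque |tau| <= TAU_MAX playing the role of the
    # contact constraint. All values are invented for illustration.
    M, L, G, TAU_MAX = 50.0, 1.0, 9.81, 60.0
    KP, KD = 800.0, 200.0                 # saturated PD recovery gains
    DT, T_MAX, TH_FAIL = 2e-3, 3.0, 0.5   # step (s), horizon (s), fall angle (rad)

    def is_balanced(th, thd):
        """Simulate a maximum-effort recovery from COM state (th, thd);
        True if the pendulum returns near upright rest (or at least
        never exceeds the fall angle) within the horizon."""
        for _ in range(int(T_MAX / DT)):
            if abs(th) > TH_FAIL:
                return False                       # fallen
            if abs(th) < 1e-3 and abs(thd) < 1e-3:
                return True                        # recovered to rest
            tau = np.clip(-KP * th - KD * thd, -TAU_MAX, TAU_MAX)
            thdd = (M * G * L * np.sin(th) + tau) / (M * L * L)
            thd += thdd * DT
            th += thd * DT
        return abs(th) < TH_FAIL

    # Partition a grid of COM states into balanced vs falling; the
    # border of the True region approximates the stability boundary.
    boundary = {(th, thd): is_balanced(th, thd)
                for th in np.linspace(-0.3, 0.3, 9)
                for thd in np.linspace(-1.0, 1.0, 9)}
    ```

    States from which even bounded maximum effort cannot arrest the fall are marked falling; sweeping the grid traces the balanced/falling boundary in the (θ, θ̇) plane, a 2-D analogue of the paper's six-dimensional manifold.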

  17. Referral of sensation to an advanced humanoid robotic hand prosthesis.

    PubMed

    Rosén, Birgitta; Ehrsson, H Henrik; Antfolk, Christian; Cipriani, Christian; Sebelius, Fredrik; Lundborg, Göran

    2009-01-01

    Hand prostheses that are currently available on the market are used by amputees to only a limited extent, partly because of lack of sensory feedback from the artificial hand. We report a pilot study that showed how amputees can experience a robot-like advanced hand prosthesis as part of their own body. We induced a perceptual illusion by which touch applied to the stump of the arm was experienced from the artificial hand. This illusion was elicited by applying synchronous tactile stimulation to the hidden amputation stump and the robotic hand prosthesis in full view. In five people who had had upper limb amputations this stimulation caused referral touch sensation from the stump to the artificial hand, and the prosthesis was experienced more like a real hand. We also showed that this illusion can work when the amputee controls the movements of the artificial hand by recordings of the arm muscle activity with electromyograms. These observations indicate that the previously described "rubber hand illusion" is also valid for an advanced hand prosthesis, even when it has a robotic-like appearance.

  18. Centaur: A Mobile Dexterous Humanoid for Surface Operations

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Ambrose, Robert O.; Goza, S. Michael; Junkin, Lucien; Neuhaus, Peter D.; Pratt, Jerry E.

    2005-01-01

    Future human and robotic planetary expeditions could benefit greatly from expanded Extra-Vehicular Activity (EVA) capabilities supporting a broad range of multiple, concurrent surface operations. Risky, expensive and complex, conventional EVAs are restricted in both duration and scope by consumables and available manpower, creating a resource management problem. A mobile, highly dexterous Extra-Vehicular Robotic (EVR) system called Centaur is proposed to cost-effectively augment human astronauts on surface excursions. The Centaur design combines a highly capable wheeled mobility platform with an anthropomorphic upper body mounted on a three degree-of-freedom waist. Able to use many ordinary handheld tools, the robot could conserve EVA hours by relieving humans of many routine inspection and maintenance chores and assisting them in more complex tasks, such as repairing other robots. As an astronaut surrogate, Centaur could take risks unacceptable to humans, respond more quickly to EVA emergencies and work much longer shifts. Though originally conceived as a system for planetary surface exploration, the Centaur concept could easily be adapted for terrestrial military applications such as demining, surveillance and other hazardous duties.

  19. Reading sadness beyond human faces.

    PubMed

    Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie

    2010-08-12

    Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERP components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial stimuli compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by the medium. Inverted human stimuli elicited a longer P1 latency and a larger N170 amplitude, while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased to human faces but rather extends to non-human displays, thus suggesting our capacity to read emotion beyond faces.

  20. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
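    Task prediction with discrete-observation HMMs reduces to scoring the incoming motion-symbol stream against each task model with the forward algorithm and picking the most likely model. A self-contained sketch with invented two-state models; the paper's actual models were trained on recorded tele-operation data:

    ```python
    import numpy as np

    def log_likelihood(obs, pi, A, B):
        """Scaled forward algorithm: log P(obs | HMM with initial
        distribution pi, transition matrix A, discrete emissions B)."""
        alpha = pi * B[:, obs[0]]
        log_p = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            log_p += np.log(alpha.sum())   # rescale to avoid underflow
            alpha /= alpha.sum()
        return log_p

    # Two toy task models over quantized hand-motion symbols 0..2
    # (all parameters invented for illustration).
    reach = (np.array([1.0, 0.0]),
             np.array([[0.9, 0.1], [0.0, 1.0]]),
             np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]))
    grasp = (np.array([1.0, 0.0]),
             np.array([[0.9, 0.1], [0.0, 1.0]]),
             np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]))

    obs = [0, 0, 0, 1, 1]                  # observed symbol stream
    scores = {name: log_likelihood(obs, *m)
              for name, m in {"reach": reach, "grasp": grasp}.items()}
    predicted = max(scores, key=scores.get)
    ```

    In an interface like the one described, the predicted task would be re-scored online as each new symbol arrives, letting the autonomy layer anticipate the operator's intent.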

  1. The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions

    PubMed Central

    Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris

    2012-01-01

    Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639

  2. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    PubMed

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
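    An echo state network of the kind named in the title keeps a fixed random recurrent reservoir and trains only a linear readout. A minimal sketch on a toy short-memory task; the reservoir size, scalings, and the delay task are illustrative assumptions, not the paper's sentence-processing setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100                                   # reservoir size (illustrative)
    W_in = rng.uniform(-0.5, 0.5, (N, 1))     # input weights, fixed
    W = rng.uniform(-0.5, 0.5, (N, N))        # recurrent weights, fixed
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 0.9

    def run_reservoir(u):
        """Drive the reservoir with scalar input sequence u; return the
        state trajectory of shape (len(u), N)."""
        x = np.zeros(N)
        states = np.empty((len(u), N))
        for t, ut in enumerate(u):
            x = np.tanh(W_in[:, 0] * ut + W @ x)
            states[t] = x
        return states

    # Toy task: recall the input from 3 steps ago. Only the linear
    # readout W_out is trained (ridge regression); the reservoir stays fixed.
    u = rng.uniform(-1.0, 1.0, 500)
    X = run_reservoir(u)
    y = np.roll(u, 3)                         # target: u delayed by 3
    X_tr, y_tr = X[50:400], y[50:400]         # discard the initial transient
    W_out = np.linalg.solve(X_tr.T @ X_tr + 1e-6 * np.eye(N), X_tr.T @ y_tr)
    err = np.sqrt(np.mean((X[400:] @ W_out - y[400:]) ** 2))  # held-out RMSE
    ```

    The same train-only-the-readout principle underlies the grammatical-construction learning described above, with word sequences as input and predicate-argument role assignments as the readout targets.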

  3. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks

    PubMed Central

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050

  4. USA Science and Engineering Festival 2014

    NASA Image and Video Library

    2014-04-25

    An attendee of the USA Science and Engineering Festival observes Robonaut 2 at the NASA Stage. Robonaut 2 is NASA's first dexterous humanoid robot that has been working on the International Space Station for the last three years. R2 recently received 1.2 meter long legs to allow mobility. This will enable R2 to assist more with regular and repetitive tasks inside and outside the station. The USA Science and Engineering Festival took place at the Washington Convention Center in Washington, DC on April 26 and 27, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  5. USA Science and Engineering Festival 2014

    NASA Image and Video Library

    2014-04-25

    Two boys attending the USA Science and Engineering Festival pose with Robonaut 2 at the NASA Stage. Robonaut 2 is NASA's first dexterous humanoid robot that has been working on the International Space Station for the last three years. R2 recently received 1.2 meter long legs to allow mobility. This will enable R2 to assist more with regular and repetitive tasks inside and outside the station. The USA Science and Engineering Festival took place at the Washington Convention Center in Washington, DC on April 26 and 27, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  6. Regularity in an environment produces an internal torque pattern for biped balance control.

    PubMed

    Ito, Satoshi; Kawasaki, Haruhisa

    2005-04-01

    In this paper, we present a control method for achieving biped static balance under unknown periodic external forces of which only the period is known. In order to maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that exerts an external force periodically, the uncertainty about the regularity of that environment is gradually resolved through a learning process, and a torque pattern for the balancing motion is finally acquired. Consequently, static balance is maintained in a feedforward manner, without feedback from the ground reaction forces.
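    The learning scheme can be caricatured as iterative learning control over one known period: the residual error observed during each cycle is imprinted onto a feedforward torque pattern, which converges until balance no longer needs force feedback. A toy scalar sketch, with invented gain and disturbance (not the paper's actual controller):

    ```python
    import numpy as np

    K = 100                                   # samples over one known period
    f_ext = 5.0 * np.sin(2 * np.pi * np.arange(K) / K)  # unknown periodic push
    tau = np.zeros(K)                         # feedforward torque pattern
    gamma = 0.5                               # learning gain, 0 < gamma < 1

    for cycle in range(30):
        # During learning, the residual disturbance felt over one cycle
        # acts as the error signal; no ground-reaction-force feedback.
        error = f_ext + tau                   # net periodic torque on the body
        tau -= gamma * error                  # imprint the correction

    residual = np.max(np.abs(f_ext + tau))    # shrinks by (1 - gamma) per cycle
    ```

    Because only the period is needed to index the pattern, the learned tau can then be replayed open-loop, which mirrors the feedforward balance described in the abstract.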

  7. The Ear and Hearing in Bipes biporus

    PubMed Central

    Wever, Ernest Glen; Gans, Carl

    1972-01-01

    The sound conduction system of Bipes biporus is unusual among amphisbaenians, in that the columella does not have a cartilaginous or bony extracolumella passing laterally to the labial skin. Instead, the terminal disk of the columella ends in fibrous tissue beneath a deep fold of skin forming the nuchal constriction. The occurrence of an epihyal supports earlier suggestions that the amphisbaenian extracolumella may be homologous to the epihyal. Measurements of cochlear potentials, made by directing the sound stimuli to the region of the head posteroventral to the quadrate bone, show that Bipes biporus ranks high among amphisbaenians in auditory sensitivity. PMID:4506791

  8. Walking on a moving surface: energy-optimal walking motions on a shaky bridge and a shaking treadmill can reduce energy costs below normal.

    PubMed

    Joshi, Varun; Srinivasan, Manoj

    2015-02-08

    Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations.

  9. Walking on a moving surface: energy-optimal walking motions on a shaky bridge and a shaking treadmill can reduce energy costs below normal

    PubMed Central

    Joshi, Varun; Srinivasan, Manoj

    2015-01-01

    Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass-spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations. PMID:25663810

  10. Securing Safety with Sensors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Robot Systems Technology Branch at NASA's Johnson Space Center collaborated with the Defense Advanced Research Projects Agency to design Robonaut, a humanoid robot developed to assist astronauts with Extra Vehicular Activities (EVA) such as space structure assembly and repair operations. By working side-by-side with astronauts or going where risks are too great for people, Robonaut is expected to expand the Space Agency's ability for construction and discovery. NASA engineers equipped Robonaut with human-looking, dexterous hands complete with five fingers to accomplish its tasks. The Robonaut hand is one of the first being developed for space EVA use and is the closest in size and capability to a suited astronaut's hand. As part of the development process, an advanced sensor system was needed to provide an improved method to measure the movement and forces exerted by Robonaut's forearms and hands.

  11. Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay

    NASA Technical Reports Server (NTRS)

    SunSpiral, Vytas; Wheeler, Kevin R.; Allan, Mark B.; Martin, Rodney

    2006-01-01

    Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual path research agenda is reviewed which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior to be the source of autonomous control. This approach changes the role of teleoperation from being a stand-in for autonomy into the first data collection step for developing generative models capable of autonomous control of the robot.
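    Any such model needs a distance on six-dimensional (position and orientation) poses that combines translational error with rotational error. A generic sketch; the weighting, function names, and the per-sample trajectory comparison are illustrative assumptions, and the paper's actual models are deterministic and HMM-based rather than distance-based:

    ```python
    import numpy as np

    def pose_distance(p1, q1, p2, q2, w_rot=0.1):
        """Blend position error (m) with the quaternion geodesic
        angle (rad), weighted by w_rot."""
        d_pos = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
        dot = abs(np.dot(q1, q2))              # |.| handles the q / -q ambiguity
        d_rot = 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))
        return d_pos + w_rot * d_rot

    def trajectory_distance(traj_a, traj_b, w_rot=0.1):
        """Mean pose distance over two equal-length 6-D trajectories,
        each a sequence of (position, unit-quaternion) pairs."""
        return np.mean([pose_distance(pa, qa, pb, qb, w_rot)
                        for (pa, qa), (pb, qb) in zip(traj_a, traj_b)])
    ```

    The rotation weight w_rot trades off meters against radians and would have to be tuned to the task; time-warped alignment (rather than sample-by-sample pairing) is usually needed when trajectories differ in speed.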

  12. How long did it last? You would better ask a human

    PubMed Central

    Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694

  13. How long did it last? You would better ask a human.

    PubMed

    Lacquaniti, Francesco; Carrozzo, Mauro; d'Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions.

  14. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    PubMed Central

    Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo

    2017-01-01

    In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. PMID:29311888
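    The co-occurrence backbone of cross-situational learning can be sketched with simple counts: each situation pairs an utterance with what each sensory-channel perceives, and a word's meaning is read off as its most strongly co-occurring (channel, category) pair. The proposed method is a full Bayesian generative model; this count-based sketch with invented data only illustrates the principle:

    ```python
    from collections import defaultdict

    # Each situation pairs an utterance with per-channel percepts
    # (channel -> category label); data invented for illustration.
    situations = [
        (["push", "red", "box"],  {"action": "push", "color": "red", "object": "box"}),
        (["pull", "red", "cup"],  {"action": "pull", "color": "red", "object": "cup"}),
        (["push", "blue", "cup"], {"action": "push", "color": "blue", "object": "cup"}),
    ]

    # Count how often each word co-occurs with each (channel, category).
    counts = defaultdict(lambda: defaultdict(int))
    for words, percepts in situations:
        for w in words:
            for channel, category in percepts.items():
                counts[w][(channel, category)] += 1

    def meaning(word):
        """Most strongly co-occurring (channel, category) for a word."""
        return max(counts[word].items(), key=lambda kv: kv[1])[0]
    ```

    Ambiguity within any single situation (is "red" the color, the action, or the object?) is resolved only across situations, which is exactly the cross-situational effect the paper exploits.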

  15. Towards Machine Learning of Motor Skills

    NASA Astrophysics Data System (ADS)

    Peters, Jan; Schaal, Stefan; Schölkopf, Bernhard

    Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. Early approaches to this goal during the heyday of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning that promised fully adaptive control algorithms which learn both by observation and trial-and-error. However, to date, learning techniques have yet to fulfill this promise as only a few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. To do so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting.

  16. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
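    Driving a head directly by FACS amounts to mapping action-unit intensities onto joint targets. A minimal sketch with invented AU-to-joint gains and joint names; Muecas's actual mapping is not given in this abstract:

    ```python
    # Hypothetical AU-to-joint gain table for a 12-DOF head.
    # AU codes follow FACS: 1 = inner brow raiser, 4 = brow lowerer,
    # 12 = lip corner puller; joint names and gains are invented.
    AU_TO_JOINTS = {
        1:  {"inner_brow_l": 0.8, "inner_brow_r": 0.8},
        4:  {"inner_brow_l": -0.6, "inner_brow_r": -0.6},
        12: {"mouth_corner_l": 0.7, "mouth_corner_r": 0.7},
    }

    def expression_to_pose(action_units):
        """Blend active AUs (code -> intensity in [0, 1]) into additive
        joint targets for the head controller."""
        pose = {}
        for au, intensity in action_units.items():
            for joint, gain in AU_TO_JOINTS.get(au, {}).items():
                pose[joint] = pose.get(joint, 0.0) + gain * intensity
        return pose

    # A smile with slightly raised inner brows.
    smile = expression_to_pose({12: 1.0, 1: 0.5})
    ```

    Because the interface is expressed in AU codes rather than joint names, third-party recognition or synthesis systems that speak FACS can drive the head without knowing its kinematics, which is the portability argument made in the abstract.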

  17. Arash: A social robot buddy to support children with cancer in a hospital environment.

    PubMed

    Meghdari, Ali; Shariati, Azadeh; Alemi, Minoo; Vossoughi, Gholamreza R; Eydi, Abdollah; Ahmadi, Ehsan; Mozafari, Behrad; Amoozandeh Nobaveh, Ali; Tahami, Reza

    2018-06-01

    This article presents the thorough design procedure, specifications, and performance of a mobile social robot friend, Arash, for the educational and therapeutic involvement of children with cancer, based on their interests and needs. Our research focuses on employing Arash in a pediatric hospital environment to entertain, assist, and educate children with cancer who suffer from physical pain caused by both the disease and its treatment process. Since cancer treatment causes emotional distress, which can reduce the efficiency of medications, using social robots to interact with children with cancer in a hospital environment could decrease this distress, thereby improving the effectiveness of their treatment. Arash is a 15-degree-of-freedom, low-cost humanoid mobile robot buddy, carefully designed with appropriate measures and developed to interact with children ages 5-12 years old. The robot has five physical subsystems: the head, arms, torso, waist, and mobile platform. The robot's final appearance is a novel concept, selected based on a survey of 50 children with chronic diseases at three pediatric hospitals in Tehran, Iran. Based on these findings and preferences, Arash was designed, built, improved, and enhanced to operate successfully in pediatric cancer hospitals. Two experiments were devised to evaluate the children's level of acceptance and involvement with the robot, assess their feelings about it, and measure how similar the robot was to the favored conceptual sketch. Both experiments were conducted in the form of storytelling and appearance/performance evaluations. The obtained results confirm the high engagement and interest of pediatric cancer patients with the constructed robot.

  18. Towards Autonomous Operation of Robonaut 2

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hart, Stephen W.; Yamokoski, J. D.

    2011-01-01

    The Robonaut 2 (R2) platform, as shown in Figure 1, was designed through a collaboration between NASA and General Motors to be a capable robotic assistant with dexterity similar to that of a suited astronaut [1]. An R2 robot was sent to the International Space Station (ISS) in February 2011 and, in doing so, became the first humanoid robot in space. Its capabilities are presently being tested and expanded to increase its usefulness to the crew. Current work on R2 includes the addition of a mobility platform to allow the robot to complete tasks (such as cleaning, maintenance, or simple construction activities) both inside and outside of the ISS. To support these new activities, R2's software architecture is being developed to provide efficient ways of programming robust and autonomous behavior. In particular, a multi-tiered software architecture is proposed that combines principles of low-level feedback control with higher-level planners that accomplish behavioral goals at the task level given the run-time context, user constraints, the health of the system, and so on. The proposed architecture is shown in Figure 2. At the lowest level, the resource level, are the various sensory and motor signals available to the system. The sensory signals for a robot such as R2 include multiple channels of force/torque data, joint or Cartesian positions calculated through the robot's proprioception, and signals derived from objects observable by its cameras.

  19. In our own image? Emotional and neural processing differences when observing human–human vs human–robot interactions

    PubMed Central

    Wang, Yin

    2015-01-01

    Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418

  20. From self-observation to imitation: visuomotor association on a robotic hand.

    PubMed

    Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo

    2008-04-15

    Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degrees-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be, at the neuronal level, founded on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots as well as to shed new light on the cognitive processes at play in early human cognitive development.
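    The self-observation idea lends itself to a compact sketch: an associative map is fit between babbled motor commands and their visual consequences, and imitating a novel posture then amounts to reproducing its visual appearance. All dimensions and the stand-in visual map V below are illustrative assumptions, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-observation phase: the robot "babbles" random joint configurations
# and records the visual features of its own hand. A fixed linear map V
# stands in for the real visual system (an illustrative assumption).
n_joints, n_visual = 16, 8
V = rng.normal(size=(n_visual, n_joints))
motor = rng.normal(size=(200, n_joints))                    # babbled postures
visual = motor @ V.T + 0.001 * rng.normal(size=(200, n_visual))

# Associative learning: least-squares map from visual features to motor space.
W, *_ = np.linalg.lstsq(visual, motor, rcond=None)

# Imitation phase: a novel demonstrated posture, seen only visually,
# is mapped to motor commands that reproduce its visual appearance.
demo_motor = rng.normal(size=n_joints)
imitated = (demo_motor @ V.T) @ W
```

    Because the visual space is lower-dimensional than the joint space, the imitated posture matches the demonstration in appearance (imitated @ V.T is close to demo_motor @ V.T) rather than joint by joint, mirroring how visual self-observation alone can bootstrap imitation.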

  1. Ethica ex machina: issues in roboethics.

    PubMed

    Mushiaki, Shigeru

    2013-12-01

Is "roboethics" the "ethics of humans" or the "ethics of robots"? According to the Roboethics Roadmap (Gianmarco Veruggio), it is the human ethics of robot designers, manufacturers, and users. And if roboethics takes deep root in society, artificial ethics (the ethics of robots) might someday be put on the agenda. At the 1st International Symposium on Roboethics in San Remo, Ronald C. Arkin gave the presentation "Bombs, Bonding, and Bondage: Human-Robot Interaction and Related Ethical Issues" (2004). "Bondage" is the issue of enslavement and possible rebellion of robots. "Bombs" is the issue of military use of robots. And "bonding" is the issue of affective, emotional attachment of humans to robots. I contrast two extreme attitudes towards the issue of "bonding" and propose a middle ground. "Anthropomorphism" has two meanings. First, it means "human-shaped-ness." Second, it means "attribution of human characteristics or feelings to a nonhuman being (god, animal, or object)" (personification, empathy). Some say that the Japanese (or East Asians) hold "animism," which makes it easy for them to treat robots like animated beings (to anthropomorphize robots); hence "Robot Kingdom Japan." Cosima Wagner criticizes such exaggeration and oversimplification as "invented tradition". I reinforce her argument with neuroscientific findings and argue that such "animism" is neither Shintoistic nor Buddhistic, but a universal tendency. Roboticists, especially Japanese roboticists, emphasize that robotics is "anthropology." It is true that through the construction of humanoid robots we can better understand human beings (the so-called "constructive approach"). But at the same time, we must not forget that robotic technology, like any other technology, changes our way of living and being, and deeply so: it can bring about our ontological transformation. In this sense, the governance of robotic technology is "governed governance."
The interdisciplinary research area of technology assessment studies (TAS) will gain much importance. And we should always be ready to rethink the direction of the research and development of robotic technology, bearing the desirable future of human society in mind.

  2. Gradient Learning Algorithms for Ontology Computing

    PubMed Central

    Gao, Wei; Zhu, Linli

    2014-01-01

    The gradient learning model has been attracting great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments on the plant and humanoid robotics domains verify the efficiency of the new computational model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752

  3. Tactile Gloves for Autonomous Grasping With the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Martin, T. B.; Ambrose, R. O.; Diftler, M. A.; Platt, R., Jr.; Butzer, M. J.

    2004-01-01

    Tactile data from rugged gloves are providing the foundation for developing autonomous grasping skills for the NASA/DARPA Robonaut, a dexterous humanoid robot. These custom gloves complement the human-like dexterity available in the Robonaut hands. Multiple versions of the gloves are discussed, showing a progression in using advanced materials and construction techniques to enhance sensitivity and overall sensor coverage. The force data provided by the gloves can be used to improve dexterous, tool, and power grasping primitives. Experiments with the latest gloves focus on the use of tools, specifically a power drill used to approximate an astronaut's torque tool.

  4. New insights into olivo-cerebellar circuits for learning from a small training sample.

    PubMed

    Tokuda, Isao T; Hoang, Huu; Kawato, Mitsuo

    2017-10-01

    Artificial intelligence such as deep neural networks has exhibited remarkable performance in simulated video games and 'Go'. In contrast, most humanoid robots in the DARPA Robotics Challenge fell to the ground. The dramatic contrast in performance is mainly due to differences in the amount of training data, which is huge in the former case and small in the latter. Animals cannot afford millions of failed trials, which would lead to injury and death. Humans fall only several thousand times before they can balance and walk. We hypothesize that a unique closed-loop neural circuit formed by the Purkinje cells, the cerebellar deep nucleus and the inferior olive in and around the cerebellum, together with the highest density of gap junctions, which regulate synchronous activities of the inferior olive nucleus, constitutes computational machinery for learning from a small sample. We discuss recent experimental and computational advances associated with this hypothesis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Teleoperation of Robonaut Using Finger Tracking

    NASA Technical Reports Server (NTRS)

    Champoux, Rachel G.; Luo, Victor

    2012-01-01

    With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is required because of the complexity of a human-like hand, for which a joystick is not accurate enough. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input methods. The purpose of this project was to develop a natural user interface for teleoperating a remote robot. Specifically, this interface was designed to control Robonaut on the International Space Station to perform tasks too dangerous and/or too trivial for human astronauts. The interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time. The information is then relayed to a simulator, the stand-in for Robonaut, after a slight delay.

  6. Robotic hand with locking mechanism using TCP muscles for applications in prosthetic hand and humanoids

    NASA Astrophysics Data System (ADS)

    Saharan, Lokesh; Tadesse, Yonas

    2016-04-01

    This paper presents a biomimetic, lightweight, 3D printed and customizable robotic hand with a locking mechanism, actuated by Twisted and Coiled Polymer (TCP) muscles based on nylon precursor fibers as artificial muscles. Previously, we presented a small-sized biomimetic hand using nylon-based artificial muscles and fishing line muscles as actuators. The current study focuses on an adult-sized prosthetic hand with an improved design and a position/force locking system. Energy efficiency is always a concern when making compact, lightweight, durable and cost-effective devices. With a natural human hand, holding an object for a long time is tiring because energy is continuously spent keeping the fingers in position. Similarly, in prosthetic hands we need to supply energy continuously to the artificial muscles to hold an object for a certain period of time, which is certainly not energy efficient. In this work, we describe the design of the robotic hand and locking mechanism, along with experimental results on the performance of the locking mechanism.

  7. Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.

    PubMed

    Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela

    2015-12-01

    There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
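    The activation dynamics at the heart of a DNF model follow an Amari-type field equation: activation u(x,t) relaxes toward a resting level under localized input and a lateral excitation/inhibition kernel. A minimal one-dimensional numerical sketch (all parameters illustrative; this is not the authors' ARoS model):

```python
import numpy as np

def simulate_dnf(n=100, steps=200, tau=10.0, h=-2.0, dt=1.0):
    """Amari-style field: tau * du/dt = -u + h + S + sum_j w(i - j) f(u_j)."""
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    # Mexican-hat interaction: narrow excitation, broader inhibition
    w = 4.0 * np.exp(-d**2 / 18.0) - 1.5 * np.exp(-d**2 / 162.0)
    S = 5.0 * np.exp(-(x - n // 2) ** 2 / 18.0)   # localized input bump
    u = np.full(n, h)                              # field starts at resting level
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))               # sigmoidal rate function
        u = u + (dt / tau) * (-u + h + S + w @ f)
    return u

u = simulate_dnf()   # a self-stabilized activation peak forms at the input site
```

    The self-stabilized peak that forms at the stimulated location is the kind of activation-based memory such models use to represent a subtask or serial-order item.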

  8. Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment

    PubMed Central

    Yamashita, Yuichi; Tani, Jun

    2008-01-01

    It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. 
Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems. PMID:18989398
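    The effect of multiple timescales can be illustrated with leaky-integrator units, the building block of such continuous-time networks: the time constant tau sets how quickly a unit's state tracks its input, so slow units integrate over a longer history. A schematic sketch, not the trained network from the paper:

```python
import numpy as np

def leaky_integrator(inputs, tau, dt=1.0):
    """u_{t+1} = u_t + (dt/tau) * (-u_t + x_t); larger tau means a slower unit."""
    u, out = 0.0, []
    for x in inputs:
        u += (dt / tau) * (-u + x)
        out.append(u)
    return np.array(out)

# A fast unit (tau = 2) follows a step input almost immediately; a slow unit
# (tau = 50) lags behind, effectively encoding longer-timescale context.
step = np.concatenate([np.zeros(50), np.ones(50)])
fast = leaky_integrator(step, tau=2.0)
slow = leaky_integrator(step, tau=50.0)
```

    In an MTRNN-style architecture, fast units of this kind carry the primitives while slow units carry the sequencing, which is what allows functional hierarchy to emerge without explicit modular structure.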

  9. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence

    PubMed Central

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states by comparing brain responses during interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances against the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents. PMID:22586381

  10. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence.

    PubMed

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states by comparing brain responses during interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances against the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents.

  11. Transparent actuators and robots based on single-layer superaligned carbon nanotube sheet and polymer composites.

    PubMed

    Chen, Luzhuo; Weng, Mingcen; Zhang, Wei; Zhou, Zhiwei; Zhou, Yi; Xia, Dan; Li, Jiaxin; Huang, Zhigao; Liu, Changhong; Fan, Shoushan

    2016-03-28

    Transparent actuators have been attracting growing interest recently, as they demonstrate potential applications in the fields of invisible robots, tactical displays, variable-focus lenses, and flexible cellular phones. However, previous technologies did not simultaneously realize macroscopic transparent actuators with the advantages of large shape deformation, low-voltage-driven actuation and fast fabrication. Here, we develop a fast approach to fabricate a high-performance transparent actuator based on single-layer superaligned carbon nanotube sheet and polymer composites. Various advantages of single-layer nanotube sheets, including high transparency, considerable conductivity, and ultra-thin dimensions, together with selected polymer materials, realize all the above required advantages. Also, this is the first time that a single-layer nanotube sheet has been used to fabricate actuators with high transparency, avoiding structural damage to the single-layer nanotube sheet. The transparent actuator shows a transmittance of 72% at the wavelength of 550 nm and bends remarkably with a curvature of 0.41 cm(-1) under a DC voltage for 5 s, demonstrating a significant advance in technological performance compared to previous conventional actuators. To illustrate their great potential usage, a transparent wiper and a humanoid robot "hand" were elaborately designed and fabricated, initiating a new direction in the development of high-performance invisible robotics and other intelligent applications with transparency.
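    The reported bending curvature (0.41 cm(-1)) can be related to directly measurable quantities: assuming the actuator bends into a circular arc, curvature follows from the chord length and mid-point deflection via the sagitta formula. This is an illustrative calculation with hypothetical measurements; the authors' exact procedure is given in their ESI:

```python
def curvature_from_sagitta(chord_cm, sagitta_cm):
    """Circular-arc assumption: R = (s**2 + (c/2)**2) / (2*s), kappa = 1/R."""
    c, s = chord_cm, sagitta_cm
    radius = (s**2 + (c / 2) ** 2) / (2 * s)
    return 1.0 / radius

# Hypothetical measurements: a 3 cm chord with a 0.5 cm mid-point deflection
kappa = curvature_from_sagitta(3.0, 0.5)   # -> 0.4 cm^-1
```

    Measuring only a chord and its sagitta avoids having to trace the full bent profile of the strip.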

  12. Transparent actuators and robots based on single-layer superaligned carbon nanotube sheet and polymer composites

    NASA Astrophysics Data System (ADS)

    Chen, Luzhuo; Weng, Mingcen; Zhang, Wei; Zhou, Zhiwei; Zhou, Yi; Xia, Dan; Li, Jiaxin; Huang, Zhigao; Liu, Changhong; Fan, Shoushan

    2016-03-01

    Transparent actuators have been attracting emerging interest recently, as they demonstrate potential applications in the fields of invisible robots, tactical displays, variable-focus lenses, and flexible cellular phones. However, previous technologies did not simultaneously realize macroscopic transparent actuators with advantages of large-shape deformation, low-voltage-driven actuation and fast fabrication. Here, we develop a fast approach to fabricate a high-performance transparent actuator based on single-layer superaligned carbon nanotube sheet and polymer composites. Various advantages of single-layer nanotube sheets including high transparency, considerable conductivity, and ultra-thin dimensions together with selected polymer materials completely realize all the above required advantages. Also, this is the first time that a single-layer nanotube sheet has been used to fabricate actuators with high transparency, avoiding the structural damage to the single-layer nanotube sheet. The transparent actuator shows a transmittance of 72% at the wavelength of 550 nm and bends remarkably with a curvature of 0.41 cm-1 under a DC voltage for 5 s, demonstrating a significant advance in technological performances compared to previous conventional actuators. To illustrate their great potential usage, a transparent wiper and a humanoid robot ``hand'' were elaborately designed and fabricated, which initiate a new direction in the development of high-performance invisible robotics and other intelligent applications with transparency. 
Electronic supplementary information (ESI) available: Video records of the actuation process of the transparent wiper and the grabbing-releasing process of the transparent robot ``hand'', transmittance spectra of the PET and BOPP films, the SEM image showing the thickness of the SACNT sheet, calculation of the curvature, calculation of energy efficiency, experimental results of the control experiment, modeling of the SACNT/PET and PET/BOPP composites and experimental results of the repeatability test. See DOI: 10.1039/c5nr07237a

  13. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking.

    PubMed

    Hamed, Kaveh Akbari; Gregg, Robert D

    2016-07-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
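    The stability notion targeted by such designs can be stated compactly: a periodic orbit of the hybrid system is exponentially stable when the Jacobian of its Poincare return map has all eigenvalues strictly inside the unit circle, and the iterative BMI/LMI search tunes the decentralized gains until this holds. A toy numerical check of that criterion (hypothetical matrices, not the paper's algorithm):

```python
import numpy as np

def is_exponentially_stable(J):
    """A fixed point of a discrete return map is exponentially stable iff the
    spectral radius of its Jacobian is strictly less than one."""
    return float(np.max(np.abs(np.linalg.eigvals(J)))) < 1.0

# Hypothetical linearized Poincare maps: without feedback (one eigenvalue
# outside the unit circle) and with a stabilizing decentralized gain folded in.
J_open = np.array([[1.2, 0.3], [0.0, 0.8]])
J_closed = np.array([[0.5, 0.3], [0.0, 0.4]])
```

    In the paper's setting the Jacobian depends bilinearly on the controller parameters, which is why the synthesis is posed as a sequence of BMI/LMI feasibility problems rather than a single eigenvalue check.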

  14. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking*

    PubMed Central

    Hamed, Kaveh Akbari; Gregg, Robert D.

    2016-01-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059

  15. Uncanny valley: A preliminary study on the acceptance of Malaysian urban and rural population toward different types of robotic faces

    NASA Astrophysics Data System (ADS)

    Tay, T. T.; Low, Raymond; Loke, H. J.; Chua, Y. L.; Goh, Y. H.

    2018-04-01

    The proliferation of robotic technologies in recent years has brought robots closer to humanity. Much research is ongoing, at various stages of development, to bring robots into our homes, schools, nurseries, elderly care centres, offices, hospitals and factories. With recently developed robots increasingly resembling household animals and humans in appearance, there is a need to study the existence of the uncanny valley phenomenon. Generally, people's acceptance of robots increases as the robots acquire more human-like features, up to a point where people feel deep discomfort, eeriness, fear and disgust as the robot's appearance becomes almost, but not quite, human. This phenomenon, called the uncanny valley, was first reported by Masahiro Mori. Numerous studies have measured the existence of the uncanny valley in Japan and in European countries. However, limited research on the uncanny valley phenomenon in Malaysia has been reported so far. In view of the different cultural background and exposure of the Malaysian population to robotics technology compared to European or East Asian populations, it is worthwhile to study this phenomenon in the Malaysian context. The main aim of this work is to conduct a preliminary study to determine the existence of the uncanny valley phenomenon in Malaysian urban and rural populations. It is interesting to find whether the two populations differ in their acceptance, given that, among other things, they differ in the rate of urbanization and in exposure to the latest technologies. A set of four interactive robotic faces and an ideal human model representing the fifth robot are used in this study. The robots have features resembling a cute animal, a cartoon character, a typical robot and a human. Questionnaire surveys were conducted on respondents from urban and rural populations. Survey data were analysed to determine the preferred features in a humanoid robot, the respondents' acceptance of the robotic faces and the existence of the uncanny valley phenomenon. Based on this limited study, it is found that the uncanny valley phenomenon exists in both the Malaysian urban and rural populations.

  16. Development of microsized slip sensors using dielectric elastomer for incipient slippage

    NASA Astrophysics Data System (ADS)

    Hwang, Do-Yeon; Kim, Baek-chul; Cho, Han-Jeong; Li, Zhengyuan; Lee, Youngkwan; Nam, Jae-Do; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, J. C.

    2014-04-01

Humanoid robot hands have received significant attention in various fields of study. For a dexterous robot hand, a slip-detecting tactile sensor is essential to grasping objects safely. Slip sensors are also useful in robotics and prosthetics to improve precise control during manipulation tasks. In this paper, a sensor based on a biomimetic human-skin structure is fabricated. We report a resistive tactile sensor that can detect slip on the surface of the sensor structure. The newly developed resistive slip sensor uses acrylonitrile-butadiene rubber (NBR) as a dielectric substrate and carbon particles as the electrode material. The sensor presented in this paper has fingerprint-like structures whose role is similar to that of the human fingerprint. Slip can be measured because deformation of the sensor structure changes its resistance by forming a new conductive route. To verify the effectiveness of the proposed slip detection, experiments using a prototype of the resistive slip sensor were conducted with a slip-detection algorithm, and slip was successfully detected. In this paper, we discuss the slip-detection properties of our sensor and the detection principle.
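The detection principle above (incipient slip deforms the fingerprint-like ridges, forming a new conductive route and abruptly changing the resistance) can be sketched as a simple rate-of-change threshold. This is a hypothetical illustration: the threshold, sampling interval and resistance trace are invented, not values from the paper.

```python
def detect_slip(resistance_samples, dt=0.001, threshold=500.0):
    """Return sample indices where |dR/dt| exceeds a slip threshold (ohm/s).

    Illustrative only: the paper's actual algorithm and units may differ.
    """
    events = []
    for i in range(1, len(resistance_samples)):
        rate = abs(resistance_samples[i] - resistance_samples[i - 1]) / dt
        if rate > threshold:
            events.append(i)
    return events

# A steady grasp (flat resistance) followed by an abrupt drop as slip begins.
trace = [1000.0] * 5 + [998.0, 950.0, 900.0] + [900.0] * 3
print(detect_slip(trace))  # → [5, 6, 7]
```

The derivative-threshold form is a natural fit here because the claimed mechanism is a sudden change in the conductive path, not the absolute resistance level.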

  17. Information driven self-organization of complex robotic behaviors.

    PubMed

    Martius, Georg; Der, Ralf; Ay, Nihat

    2013-01-01

Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems from scaling well.
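Predictive information is the mutual information between the past and the future of the sensor process. As a minimal sketch of the quantity (not of the paper's time-local TiPI derivation or its controller update rules), here is a plug-in estimate of one-step predictive information for a discretized sensor stream; the binning and the example streams are illustrative assumptions.

```python
from collections import Counter
from math import log2

def predictive_information(stream):
    """Plug-in estimate of I(x_t; x_{t+1}) in bits for a discrete stream."""
    pairs = list(zip(stream[:-1], stream[1:]))
    n = len(pairs)
    p_joint = Counter(pairs)
    p_past = Counter(p for p, _ in pairs)
    p_fut = Counter(f for _, f in pairs)
    pi = 0.0
    for (p, f), c in p_joint.items():
        pj = c / n
        pi += pj * log2(pj / ((p_past[p] / n) * (p_fut[f] / n)))
    return pi

# A perfectly predictable alternating stream carries about one bit of
# predictive information; a constant stream carries none.
print(predictive_information([0, 1] * 50))   # ~1.0 bit
print(predictive_information([0] * 100))     # 0.0 bits
```

Maximizing such a quantity rewards behavior that is neither random (unpredictable future) nor frozen (no information to predict), which is the intuition behind using PI as a driving force.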

  18. MonoSLAM: real-time single camera SLAM.

    PubMed

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
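At its core, MonoSLAM maintains an extended Kalman filter over a joint state of camera pose and sparse landmarks, using a smooth (constant-velocity) motion model. The following deliberately tiny 1-D sketch shows that predict/update cycle with one camera coordinate, its velocity, and one landmark; the noise values and the relative-position measurement model are illustrative assumptions, not the paper's full 3-D formulation.

```python
import numpy as np

def ekf_step(x, P, z, dt=1.0, q=0.01, r=0.05):
    """One EKF predict/update. State x = [cam_pos, cam_vel, landmark]."""
    # Predict: camera moves at constant velocity; the landmark is static.
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    x = F @ x
    P = F @ P @ F.T + q * np.eye(3)
    # Update: measure the landmark's position relative to the camera.
    H = np.array([[-1.0, 0.0, 1.0]])
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + r                # innovation covariance (1x1)
    K = P @ H.T / S                    # Kalman gain
    x = x + (K * y).flatten()
    P = (np.eye(3) - K @ H) @ P
    return x, P

x = np.array([0.0, 1.0, 4.5])          # pose, velocity, rough landmark guess
P = np.eye(3)
true_cam, true_lm = 0.0, 5.0
for _ in range(20):
    true_cam += 1.0                    # ground truth: camera advances 1 m/step
    x, P = ekf_step(x, P, z=true_lm - true_cam)
# Only the camera-to-landmark offset is observable from this measurement;
# the filter drives it toward the true value (about -15 after 20 steps).
print(round(x[2] - x[0], 2))
```

The real system adds landmark initialization, a full 6-DoF motion model and active measurement selection, but the joint-state filter above is the structural skeleton.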

  19. Design of biped hip simulator using SolidWorks

    NASA Astrophysics Data System (ADS)

    Zainudin, M. R.; Yahya, A.; Fazli, M. I. M.; Syahrom, A.; Harun, F. K. C.; Nazarudin, M. S.

    2017-10-01

The increasing number of people who have undergone hip implant surgery, according to the World Health Organization (WHO), has recently drawn massive attention from researchers to developing various types of hip simulators for testing hip implants. A number of hip simulators with different functions and capabilities have been developed. This paper presents the design development of a biped hip simulator using SolidWorks software, taking into consideration several improvements and modifications. The finite element method is used to test whether the design is safe to use. The biped hip simulator has been successfully designed and is ready to be fabricated, as the endurance testing showed positive results. The maximum von Mises stress induced in the material, an alloy steel, is 2,975,862.3 N/m2, which is lower than the yield strength. Thus, the design is safe to use, as it obeys the safety criterion.
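The safety criterion used above, maximum von Mises stress below the material yield strength, reduces to a factor-of-safety check. The stress value is taken from the abstract; the 620 MPa yield strength is an assumed typical value for alloy steel, not a figure reported in the paper.

```python
def factor_of_safety(yield_strength_pa, max_von_mises_pa):
    """Yield-criterion factor of safety; > 1 means the design passes."""
    return yield_strength_pa / max_von_mises_pa

# Reported peak stress vs an assumed 620 MPa yield strength.
fos = factor_of_safety(620e6, 2_975_862.3)
print(f"factor of safety ~ {fos:.0f}")
assert fos > 1.0  # stress is far below yield, so the criterion is satisfied
```

With a margin of this size, the simulator frame is stress-limited by neither static loading nor the yield criterion under the reported conditions.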

  20. Could robots become authentic companions in nursing care?

    PubMed

    Metzler, Theodore A; Lewis, Lundy M; Pope, Linda C

    2016-01-01

Creating android and humanoid robots to furnish companionship in the nursing care of older people continues to attract substantial development capital and research. Some people object, though, that machines of this kind furnish human-robot interaction characterized by inauthentic relationships. In particular, robotic and artificial intelligence (AI) technologies have been charged with substituting mindless mimicry of human behaviour for the real presence of conscious caring offered by human nurses. When thus viewed as deceptive, the robots also have prompted corresponding concerns regarding their potential psychological, moral, and spiritual implications for people who will be interacting socially with these machines. The foregoing objections and concerns can be assessed quite differently, depending upon ambient religious beliefs or metaphysical presuppositions. The complaints may be set aside as unnecessary, for example, within religious traditions for which even current robots can be viewed as presenting spiritual aspects. Elsewhere, technological cultures may reject the complaints as expressions of outdated superstition, holding that the machines eventually will enjoy a consciousness described entirely in materialist and behaviourist terms. While recognizing such assessments, the authors of this essay propose that the heart of the foregoing objections and concerns may be evaluated, in part, scientifically - albeit with a conclusion recommending fundamental revisions in AI modelling of human mental life. Specifically, considerations now favour introduction of AI models using interactive classical and quantum computation. Without this change, the answer to the essay's title question arguably is 'no' - with it, the answer plausibly becomes 'maybe'. Either outcome holds very interesting implications for nurses. © 2015 John Wiley & Sons Ltd.

  1. Infant discrimination of humanoid robots

    PubMed Central

    Matsuda, Goh; Ishiguro, Hiroshi; Hiraki, Kazuo

    2015-01-01

    Recently, extremely humanlike robots called “androids” have been developed, some of which are already being used in the field of entertainment. In the context of psychological studies, androids are expected to be used in the future as fully controllable human stimuli to investigate human nature. In this study, we used an android to examine infant discrimination ability between human beings and non-human agents. Participants (N = 42 infants) were assigned to three groups based on their age, i.e., 6- to 8-month-olds, 9- to 11-month-olds, and 12- to 14-month-olds, and took part in a preferential looking paradigm. Of three types of agents involved in the paradigm—a human, an android modeled on the human, and a mechanical-looking robot made from the android—two at a time were presented side-by-side as they performed a grasping action. Infants’ looking behavior was measured using an eye tracking system, and the amount of time spent focusing on each of three areas of interest (face, goal, and body) was analyzed. Results showed that all age groups predominantly looked at the robot and at the face area, and that infants aged over 9 months watched the goal area for longer than the body area. There was no difference in looking times and areas focused on between the human and the android. These findings suggest that 6- to 14-month-olds are unable to discriminate between the human and the android, although they can distinguish the mechanical robot from the human. PMID:26441772

  2. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    PubMed Central

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
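The idea of deriving instantaneous phase variables and synchronizing them through a continuous dynamical process can be illustrated with a Kuramoto-style phase coupling between two agents. This is a minimal sketch of the principle, not the paper's controller: the coupling gain, task frequency and initial phase offset are invented values.

```python
import math

def simulate(coupling=1.0, dt=0.01, steps=1000):
    """Robot phase couples to a human partner keeping a steady task rhythm."""
    theta_r, theta_h = 0.0, math.pi / 2   # robot starts a quarter-cycle behind
    omega = 2 * math.pi * 0.5             # shared 0.5 Hz repetitive task
    for _ in range(steps):
        # The robot nudges its phase toward the human's; the human is steady.
        theta_r += (omega + coupling * math.sin(theta_h - theta_r)) * dt
        theta_h += omega * dt
    # Wrap the remaining phase difference into [-pi, pi).
    diff = (theta_h - theta_r + math.pi) % (2 * math.pi) - math.pi
    return abs(diff)

print(round(simulate(), 3))   # residual phase difference decays toward 0
```

The sinusoidal coupling term vanishes exactly when the phases agree, so the robot locks onto the partner's rhythm without disturbing it, which is the qualitative behavior the event- and phase-synchronization measures in the experiment quantify.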

  3. Babybot: a biologically inspired developing robotic agent

    NASA Astrophysics Data System (ADS)

    Metta, Giorgio; Panerai, Francesco M.; Sandini, Giulio

    2000-10-01

The study of development, either artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We shall consider whether developmental studies might provide a different and potentially interesting perspective either on how to build an artificial adaptive agent, or on understanding how the brain solves sensory, motor, and cognitive tasks. It is our opinion that the acquisition of the proper behavior might indeed be facilitated because, within an ecological context, the agent, its adaptive structure and the environment dynamically interact, thus constraining the otherwise difficult learning problem. In very general terms we shall describe the proposed approach and supporting biological facts. In order to further analyze these aspects from the modeling point of view, we shall demonstrate how a twelve-degree-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. In particular, the experimental setup consists of a five degrees-of-freedom (dof) robot head and an off-the-shelf six-dof robot manipulator, both mounted on a rotating base: i.e. the torso. From the sensory point of view, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels. It is worth mentioning, for example, the space-variant eyes, exploiting foveal and peripheral vision in a single arrangement, and the inertial sensor providing efficient image stabilization (vestibulo-ocular reflex).

  4. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  5. Meeting the challenges--the role of medical informatics in an ageing society.

    PubMed

    Koch, Sabine

    2006-01-01

    The objective of this paper is to identify trends and new technological developments that appear due to an ageing society and to relate them to current research in the field of medical informatics. A survey of the current literature reveals that recent technological advances have been made in the fields of "telecare and home-monitoring", "smart homes and robotics" and "health information systems and knowledge management". Innovative technologies such as wearable devices, bio- and environmental sensors and mobile, humanoid robots do already exist and ambient assistant living environments are being created for an ageing society. However, those technologies have to be adapted to older people's self-care processes and coping strategies, and to support new ways of healthcare delivery. Medical informatics can support this process by providing the necessary information infrastructure, contribute to standardisation, interoperability and security issues and provide modelling and simulation techniques for educational purposes. Research fields of increasing importance with regard to an ageing society are, moreover, the fields of knowledge management, ubiquitous computing and human-computer interaction.

  6. The ITALK project: a developmental robotics approach to the study of individual, social, and linguistic learning.

    PubMed

    Broz, Frank; Nehaniv, Chrystopher L; Belpaeme, Tony; Bisio, Ambra; Dautenhahn, Kerstin; Fadiga, Luciano; Ferrauto, Tomassino; Fischer, Kerstin; Förster, Frank; Gigliotta, Onofrio; Griffiths, Sascha; Lehmann, Hagen; Lohan, Katrin S; Lyon, Caroline; Marocco, Davide; Massera, Gianluca; Metta, Giorgio; Mohan, Vishwanathan; Morse, Anthony; Nolfi, Stefano; Nori, Francesco; Peniak, Martin; Pitsch, Karola; Rohlfing, Katharina J; Sagerer, Gerhard; Sato, Yo; Saunders, Joe; Schillingmann, Lars; Sciutti, Alessandra; Tikhanoff, Vadim; Wrede, Britta; Zeschel, Arne; Cangelosi, Angelo

    2014-07-01

    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots. Copyright © 2014 Cognitive Science Society, Inc.

  7. Muscle Motion Solenoid Actuator

    NASA Astrophysics Data System (ADS)

    Obata, Shuji

It is one of our dreams to mechanically restore lost body functions to injured humans. Realistic humanoid robots composed of such machines require muscle-motion actuators controlled entirely by pulling actions. In particular, antagonistic pairs of bi-articular muscles are very important in animal motion. A system of actuators is proposed using the electromagnetic force of solenoids, with stroke lengths of over 10 cm and forces of about 20 N, which are needed to move a real human arm. The devised actuators build on recent advances in modern electromagnetic materials; older materials could not offer such capabilities. The composite actuators are controlled by a high-performance computer and software to produce genuine motions.

  8. Emotional Expression in Simple Line Drawings of a Robot's Face Leads to Higher Offers in the Ultimatum Game.

    PubMed

    Terada, Kazunori; Takeuchi, Chikara

    2017-01-01

    In the present study, we investigated whether expressing emotional states using a simple line drawing to represent a robot's face can serve to elicit altruistic behavior from humans. An experimental investigation was conducted in which human participants interacted with a humanoid robot whose facial expression was shown on an LCD monitor that was mounted as its head (Study 1). Participants were asked to play the ultimatum game, which is usually used to measure human altruistic behavior. All participants were assigned to be the proposer and were instructed to decide their offer within 1 min by controlling a slider bar. The corners of the robot's mouth, as indicated by the line drawing, simply moved upward, or downward depending on the position of the slider bar. The results suggest that the change in the facial expression depicted by a simple line drawing of a face significantly affected the participant's final offer in the ultimatum game. The offers were increased by 13% when subjects were shown contingent changes of facial expression. The results were compared with an experiment in a teleoperation setting in which participants interacted with another person through a computer display showing the same line drawings used in Study 1 (Study 2). The results showed that offers were 15% higher if participants were shown a contingent facial expression change. Together, Studies 1 and 2 indicate that emotional expression in simple line drawings of a robot's face elicits the same higher offer from humans as a human telepresence does.
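In Study 1, the corners of the drawn mouth move up or down contingent on the proposer's slider position. A hypothetical sketch of such a contingent mapping follows; the slider range, the linear form and the neutral midpoint are assumptions for illustration, not details from the paper.

```python
def mouth_corner_offset(offer, total=1000):
    """Map an offer in [0, total] to a mouth-corner displacement in [-1, 1]:
    stingy offers turn the corners down, generous offers turn them up.
    (Hypothetical linear mapping; the study's actual scaling is not given.)
    """
    offer = max(0, min(offer, total))
    return 2.0 * offer / total - 1.0

print(mouth_corner_offset(250))   # → -0.5 (downturned mouth)
print(mouth_corner_offset(500))   # → 0.0 (neutral)
print(mouth_corner_offset(900))   # → 0.8 (smile)
```

The key experimental manipulation is the contingency itself: the expression tracks the participant's own slider movements in real time, which is what the non-contingent control condition removes.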

  9. Emotional Expression in Simple Line Drawings of a Robot's Face Leads to Higher Offers in the Ultimatum Game

    PubMed Central

    Terada, Kazunori; Takeuchi, Chikara

    2017-01-01

    In the present study, we investigated whether expressing emotional states using a simple line drawing to represent a robot's face can serve to elicit altruistic behavior from humans. An experimental investigation was conducted in which human participants interacted with a humanoid robot whose facial expression was shown on an LCD monitor that was mounted as its head (Study 1). Participants were asked to play the ultimatum game, which is usually used to measure human altruistic behavior. All participants were assigned to be the proposer and were instructed to decide their offer within 1 min by controlling a slider bar. The corners of the robot's mouth, as indicated by the line drawing, simply moved upward, or downward depending on the position of the slider bar. The results suggest that the change in the facial expression depicted by a simple line drawing of a face significantly affected the participant's final offer in the ultimatum game. The offers were increased by 13% when subjects were shown contingent changes of facial expression. The results were compared with an experiment in a teleoperation setting in which participants interacted with another person through a computer display showing the same line drawings used in Study 1 (Study 2). The results showed that offers were 15% higher if participants were shown a contingent facial expression change. Together, Studies 1 and 2 indicate that emotional expression in simple line drawings of a robot's face elicits the same higher offer from humans as a human telepresence does. PMID:28588520

  10. Approach-Phase Precision Landing with Hazard Relative Navigation: Terrestrial Test Campaign Results of the Morpheus/ALHAT Project

    NASA Technical Reports Server (NTRS)

    Crain, Timothy P.; Bishop, Robert H.; Carson, John M., III; Trawny, Nikolas; Hanak, Chad; Sullivan, Jacob; Christian, John; DeMars, Kyle; Campbell, Tom; Getchius, Joel

    2016-01-01

The Morpheus Project began in late 2009 as an ambitious effort code-named Project M to integrate three ongoing multi-center NASA technology developments: humanoid robotics, liquid oxygen/liquid methane (LOX/LCH4) propulsion and Autonomous Precision Landing and Hazard Avoidance Technology (ALHAT) into a single engineering demonstration mission to be flown to the Moon by 2013. The humanoid robot effort was redirected to a deployment of Robonaut 2 on the International Space Station in February of 2011, while Morpheus continued as a terrestrial field test project integrating the existing ALHAT Project's technologies into a sub-orbital flight system using the world's first LOX/LCH4 main propulsion and reaction control system fed from the same blowdown tanks. A series of 33 tethered tests with the Morpheus 1.0 vehicle and Morpheus 1.5 vehicle were conducted from April 2011 - December 2013 before successful, sustained free flights with the primary Vertical Testbed (VTB) navigation configuration began with Free Flight 3 on December 10, 2013. Over the course of the following 12 free flights and 3 tethered flights, components of the ALHAT navigation system were integrated into the Morpheus vehicle, operations, and flight control loop. The ALHAT navigation system was integrated and run concurrently with the VTB navigation system as a reference and fail-safe option in flight (see touchdown position estimate comparisons in Fig. 1). Flight testing completed with Free Flight 15 on December 15, 2014 with a completely autonomous Hazard Detection and Avoidance (HDA), integration of surface relative and Hazard Relative Navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the VTB GPS-based navigation solution at the safe landing site target. 
This paper describes the Morpheus joint VTB/ALHAT navigation architecture, the sensors utilized during the terrestrial flight campaign, issues resolved during testing, and the navigation results from the flight tests.
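The flight tests ran the ALHAT filter alongside the VTB GPS-based solution and compared touchdown position estimates. A standard way to combine two independent estimates of the same quantity is inverse-variance (minimum-variance) weighting, sketched below; the position and uncertainty numbers are illustrative, not flight data, and this is not the project's dual-state filter itself.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance fusion of two independent scalar position estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # always smaller than either input variance
    return fused, fused_var

# Hypothetical downrange touchdown estimates (m): a tighter GPS-based
# solution fused with a looser terrain-relative one.
pos, var = fuse(101.2, 0.5**2, 102.0, 1.0**2)
print(round(pos, 2), round(var, 3))
```

The fused estimate is pulled toward the lower-variance source, and the fused variance is smaller than either input, which is why running two navigation solutions concurrently also serves as a cross-check on each.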

  11. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, the eye movements and ERPs complement each other, so the interface can provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements: blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects completed the tasks effectively using the proposed interface, and the best completion times were relatively short, very close to those achieved by manual operation.
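The EOG mode's threshold algorithm classifies blink, wink, gaze and frown from the EOG waveform. As a hypothetical two-channel sketch of that kind of classifier (vertical and horizontal amplitudes in arbitrary units), the rules and thresholds below are invented; the paper's actual channel layout and decision logic may differ.

```python
def classify_eog(vertical, horizontal, v_thr=100.0, h_thr=100.0):
    """Toy threshold classifier over vertical/horizontal EOG amplitudes."""
    if vertical > v_thr and abs(horizontal) <= h_thr:
        return "blink"        # symmetric upward deflection, both eyes
    if vertical > v_thr and abs(horizontal) > h_thr:
        return "wink"         # asymmetric: strong lateral component too
    if vertical < -v_thr:
        return "frown"        # downward deflection
    if abs(horizontal) > h_thr:
        return "gaze"         # lateral eye movement
    return "rest"

print(classify_eog(150.0, 10.0))   # → blink
print(classify_eog(20.0, 140.0))   # → gaze
```

Threshold rules like these are attractive for assistive HMIs because they run with negligible latency and need no training data, leaving the slower ERP mode for selections that demand more degrees of freedom.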

  12. Framework and Implications of Virtual Neurorobotics

    PubMed Central

    Goodman, Philip H.; Zou, Quan; Dascalu, Sergiu-Mihai

    2008-01-01

    Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications. PMID:18982115

  13. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

    PubMed

    Morse, Anthony F; Cangelosi, Angelo

    2017-02-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.

  14. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    PubMed Central

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver

    2017-01-01

    Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP).1 At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. 
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. PMID:28179882

  15. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform.

    PubMed

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C; Gewaltig, Marc-Oliver

    2017-01-01

Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are so complex that, at the current stage, they cannot meet real-time constraints, it is not possible to embed them in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows researchers to easily establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for specifying experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.

  16. KSC-2010-4444

    NASA Image and Video Library

    2010-08-20

    CAPE CANAVERAL, Fla. -- Technicians in the Space Station Processing Facility at NASA's Kennedy Space Center in Florida prepare to load the dexterous humanoid astronaut helper, Robonaut 2, or R2, into the Permanent Multipurpose Module, or PMM. Packed inside a launch box called SLEEPR, or Structural Launch Enclosure to Effectively Protect Robonaut, R2 will be placed in the same launch orientation as space shuttle Discovery's STS-133 crew members -- facing toward the nose of the shuttle with the back taking all the weight. Although R2 will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the International Space Station. STS-133 is targeted to launch Nov. 1. Photo credit: NASA/Frankie Martin

  17. Development, fabrication, and modeling of highly sensitive conjugated polymer based piezoresistive sensors in electronic skin applications

    NASA Astrophysics Data System (ADS)

    Khalili, Nazanin; Naguib, Hani E.; Kwon, Roy H.

    2016-04-01

    Human intervention can be replaced through the development of tools based on sensing devices, with a wide range of applications including humanoid robots and remote, minimally invasive surgery. Similar to the five human senses, sensors interface with their surroundings to stimulate a suitable response or action. The sense of touch, which arises in human skin, is among the most challenging senses to emulate due to its ultra-high sensitivity. This has brought forth novel and challenging issues to consider in the field of biomimetic robotics. In this work, using a multiphase reaction, a polypyrrole (PPy) based hydrogel is developed as a resistive-type pressure sensor with an intrinsically elastic microstructure stemming from three-dimensional hollow spheres. Furthermore, a semi-analytical constriction resistance model is developed that accounts for the real contact area between the PPy hydrogel sensors and the electrode, along with the dependency of the contact resistance change on the applied load. The model is then solved using a Monte Carlo technique and the sensitivity of the sensor is obtained. The experimental results showed the good tracking ability of the proposed model.

  18. EEG theta and Mu oscillations during perception of human and robot actions

    PubMed Central

    Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.

    2013-01-01

    The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may impose greater memory processing demands on the observer. Studies combining robotics and neuroscience such as this one allow us to explore the neural basis of action processing on the one hand, and to inform the design of social robots on the other. 
PMID:24348375

  19. EEG theta and Mu oscillations during perception of human and robot actions.

    PubMed

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may impose greater memory processing demands on the observer. Studies combining robotics and neuroscience such as this one allow us to explore the neural basis of action processing on the one hand, and to inform the design of social robots on the other.

  20. Posture Affects How Robots and Infants Map Words to Objects

    PubMed Central

    Morse, Anthony F.; Benitez, Viridian L.; Belpaeme, Tony; Cangelosi, Angelo; Smith, Linda B.

    2015-01-01

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body’s momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body’s momentary disposition in space. PMID:25785834

  1. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
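
    The matching procedure described here, in which both device requirements and interface performance are expressed as throughput and latency, can be sketched as a simple compatibility check. All class names and figures below are invented placeholders for illustration, not values from the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Interface:
        name: str
        throughput_bps: float  # bits per second of control information
        latency_s: float       # delay between intent and device command

    @dataclass
    class Device:
        name: str
        min_throughput_bps: float  # required control bandwidth
        max_latency_s: float       # tolerated delay for usable interactivity

    def compatible(iface: Interface, dev: Device) -> bool:
        """An interface can drive a device when it meets the device's
        throughput requirement without exceeding its latency tolerance."""
        return (iface.throughput_bps >= dev.min_throughput_bps
                and iface.latency_s <= dev.max_latency_s)

    # Placeholder performance/requirement figures, chosen only to
    # illustrate the matching idea.
    interfaces = [Interface("P300 speller", 0.5, 4.0),
                  Interface("motor-imagery BMI", 1.0, 1.0)]
    devices = [Device("light switch", 0.2, 5.0),
               Device("prosthetic hand", 5.0, 0.5)]

    for dev in devices:
        matches = [i.name for i in interfaces if compatible(i, dev)]
        print(dev.name, "->", matches or "no suitable interface")
    ```

    With these invented numbers, a slow domotics device is controllable by either interface, while the prosthetic hand's tighter requirements rule both out, mirroring the paper's conclusion that some devices do not attain optimal interactivity with current BMIs.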

  2. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by the use of well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants’ arms while they observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. 
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  3. Golden Gait: An Optimization Theory Perspective on Human and Humanoid Walking

    PubMed Central

    Iosa, Marco; Morone, Giovanni; Paolucci, Stefano

    2017-01-01

    Human walking is a complex task involving hundreds of muscles, bones, and joints working together to deliver harmonic movements, with the need to find an equilibrium between moving forward and maintaining stability. Many different computational approaches have been used to explain the mechanisms of human walking, from pendular models to fractal approaches. A new perspective can be gained by using the principles developed in the field of optimization theory, and in particular the branch known as game theory. We provide a new insight into human walking by showing that the trade-off between advancement and equilibrium managed during walking has the same solution as the Ultimatum game, one of the most famous paradigms of game theory, and that this solution is the golden ratio. The golden ratio is an irrational number that has been found in many biological and natural systems self-organized in harmonic, asymmetric, and fractal structures. Recently, the golden ratio has also been found to be the equilibrium point between the two players involved in the Ultimatum game. It has been suggested that this result may be due to the golden ratio being perceived as the fairest asymmetric solution by the two players. The golden ratio is also the most common proportion between the stance and swing phases of human walking. This approach may explain the importance of harmony in human walking, and provides new perspectives for developing quantitative assessments of human walking, efficient humanoid robotic walkers, and effective neurorobots for rehabilitation. PMID:29311890
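
    As a quick numerical illustration of the stance/swing claim above (the phase durations are assumed textbook approximations, not figures from the record): the stance phase of a typical gait cycle is often cited as about 62% of the cycle and the swing phase as about 38%, and their ratio lands close to the golden ratio.

    ```python
    import math

    # Golden ratio: phi = (1 + sqrt(5)) / 2 ~ 1.618
    phi = (1 + math.sqrt(5)) / 2

    # Assumed representative phase durations as fractions of the gait cycle
    # (~62% stance / ~38% swing is a commonly cited approximation).
    stance, swing = 0.62, 0.38

    ratio = stance / swing
    deviation = abs(ratio - phi) / phi
    print(f"stance/swing = {ratio:.3f}, phi = {phi:.3f}, deviation = {deviation:.1%}")
    ```

    With these assumed fractions the stance/swing ratio deviates from phi by under 1%, which is the kind of coincidence the record builds on.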

  4. Evaluating the human likeness of an android by comparing gaze behaviors elicited by the android and a person

    PubMed Central

    MINATO, TAKASHI; SHIMADA, MICHIHIRO; ITAKURA, SHOJI; LEE, KANG; ISHIGURO, HIROSHI

    2008-01-01

    Our research goal is to discover the principles underlying natural communication among individuals and to establish a methodology for the development of expressive humanoid robots. For this purpose we have developed androids that closely resemble human beings. The androids enable us to investigate a number of phenomena related to human interaction that could not otherwise be investigated with mechanical-looking robots. This is because more human-like devices are in a better position to elicit the kinds of responses that people direct toward each other. Moreover, we cannot ignore the role of appearance in giving us a subjective impression of human presence or intelligence. However, this impression is influenced by behavior and the complex relationship between appearance and behavior. This paper proposes a hypothesis about how appearance and behavior are related, and maps out a plan for android research to investigate this hypothesis. We then examine a study that evaluates the human likeness of androids according to the gaze behavior they elicit. Studies such as these, which integrate the development of androids with the investigation of human behavior, constitute a new research area that fuses engineering and science. PMID:18985174

  5. Robonaut's Flexible Information Technology Infrastructure

    NASA Technical Reports Server (NTRS)

    Askew, Scott; Bluethmann, William; Alder, Ken; Ambrose, Robert

    2003-01-01

    Robonaut, NASA's humanoid robot, is designed to work as both an astronaut assistant and, in certain situations, an astronaut surrogate. This highly dexterous robot performs complex tasks under telepresence control that could previously only be carried out directly by humans. Currently with 47 degrees of freedom (DOF), Robonaut is a state-of-the-art human-size telemanipulator system. While many of Robonaut's embedded components have been custom designed to meet packaging or environmental requirements, the primary computing systems used in Robonaut are currently commercial-off-the-shelf (COTS) products which have some correlation to flight-qualified computer systems. This loose coupling of information technology (IT) resources allows Robonaut to exploit cost-effective solutions while floating the technology base to take advantage of the rapid pace of IT advances. These IT systems utilize a software development environment that is compatible with both COTS hardware and flight-proven computing systems, preserving the majority of software development for a flight system. The ability to use highly integrated and flexible COTS software development tools improves productivity while minimizing redesign for a space flight system. Further, the flexibility of Robonaut's software and communication architecture has allowed it to become a widely used distributed development testbed for integrating new capabilities and furthering experimental research.

  6. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    PubMed

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm, and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm, and the hand within a single and compact framework, mimicking a similar control system found in humans. We validate our model for visuomotor control of a humanoid robot.

  7. Spatio-temporal features for tracking and quadruped/biped discrimination

    NASA Astrophysics Data System (ADS)

    Rickman, Rick; Copsey, Keith; Bamber, David C.; Page, Scott F.

    2012-05-01

    Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse and compact spatial feature descriptors and show much potential for defence and security applications. This paper considers the extension of such techniques to include information from the temporal domain, to improve utility in applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are presented, and the relative merits of the approach are discussed.

  8. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. 
A receptacle for an end effector is mounted on the tip of the wrist and contains a force-and-torque sensor that provides feedback for force (compliance) control of the arm. The end effector could be a tool or a robot hand, depending on the application.

  9. Development and Deployment of Robonaut 2 to the International Space Station

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.

    2011-01-01

    The development of the Robonaut 2 (R2) system was a joint endeavor with NASA and General Motors, producing robots strong enough to do work, yet safe enough to be trusted to work near humans. To date two R2 units have been produced, designated as R2A and R2B. This follows more than a decade of work on the Robonaut 1 units that produced advances in dexterity, tele-presence, remote supervision across time delay, combining mobility with manipulation, human-robot interaction, force control and autonomous grasping. Design challenges for the R2 included higher speed, smaller packaging, more dexterous fingers, more sensitive perception, soft drivetrain design, and the overall implementation of a system software approach for human safety. At the time of this writing the R2B unit was poised for launch to the International Space Station (ISS) aboard STS-133. R2 will be the first humanoid robot in space, and is arguably the most sophisticated robot in the world, bringing NASA into the 21st century as the world's leader in this field. Joining the other robots already on ISS, the station is now an exciting lab for robot experiments and utilization. A particular challenge for this project has been the design and certification of the robot and its software for work near humans. The 3-layer software system will be described, and the path to ISS certification will be reviewed. R2 will go through a series of ISS checkout tests during 2011. A taskboard was shipped with the robot that will be used to compare R2B's dexterous manipulation in zero gravity with the ground robot's ability to handle similar objects in Earth's gravity. R2's taskboard has panels with increasingly difficult tasks, starting with switches, progressing to connectors and eventually handling softgoods. The taskboard is modular, and new interfaces and experiments will be built up using equipment already on ISS. 
Since the objective is to test R2 performing tasks with human interfaces, hardware abounds on ISS and the crew will be involved to help select tasks that are dull, dirty or dangerous. Future plans for R2 include a series of upgrades, evolving from static IVA (Intravehicular Activity) operations, to mobile IVA, then EVA (Extravehicular Activity).

  10. A simple strategy for jumping straight up.

    PubMed

    Hemami, Hooshang; Wyman, Bostwick F

    2012-05-01

    Jumping from a stationary standing position into the air is a transition from a constrained motion in contact with the ground to an unconstrained system not in contact with the ground. A simple case of the jump, as it applies to humans, robots, and humanoids, is studied in this paper. The dynamics of the constrained rigid body are expanded to define a larger system that accommodates the jump. The formulation is applied to a four-link, three-dimensional system in order to articulate the ballistic motion involved. The activity of the muscular system and the role of the major sagittal muscle groups are demonstrated. The control strategy, involving state feedback and central feed-forward signals, is formulated, and computer simulations are presented to assess the feasibility of the formulations, the strategy, and the jump. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Design and simulation of a cable-pulley-based transmission for artificial ankle joints

    NASA Astrophysics Data System (ADS)

    Liu, Huaxin; Ceccarelli, Marco; Huang, Qiang

    2016-06-01

    In this paper, a mechanical transmission based on cable pulleys is proposed for human-like actuation in human-scale artificial ankle joints. The articular anatomy of the human ankle is discussed as biomimetic inspiration for designing accurate, efficient, and robust motion control of artificial ankle joint devices. The design procedure is presented through conceptual considerations and design details for an interactive solution of the transmission system. A mechanical design is elaborated for the angular pitch motion of the ankle joint. A multi-body dynamic simulation model is elaborated accordingly and evaluated numerically in the ADAMS environment. Results of the numerical simulations are discussed to evaluate the dynamic performance of the proposed design solution and to investigate its feasibility for future humanoid robot applications.

  12. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented by scholars as higher-dimensional data, and thus sparse learning techniques have been introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of this ontology technology. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model achieves a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
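
    The record does not spell out the ADAL update rule, but iterative sparse-vector procedures of this kind are typically built around l1-regularized least squares. A minimal stand-in sketch using plain proximal-gradient (ISTA) iterations, which shares the soft-thresholding operator such methods rely on, might look like this; all function names and data are illustrative, not from the paper.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1: shrinks each entry toward zero."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_vector_ista(X, y, lam, steps=500):
        """Minimize 0.5*||X w - y||^2 + lam*||w||_1 by proximal gradient (ISTA).
        A generic stand-in for the iterative sparse-vector procedure; the
        paper's ADAL variant differs in its update rule."""
        _, d = X.shape
        L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
        w = np.zeros(d)
        for _ in range(steps):
            grad = X.T @ (X @ w - y)
            w = soft_threshold(w - grad / L, lam / L)
        return w

    # Synthetic problem: only the first 3 of 20 coordinates are active.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))
    true_w = np.zeros(20)
    true_w[:3] = [2.0, -1.5, 1.0]
    y = X @ true_w

    w = sparse_vector_ista(X, y, lam=0.1)
    print("recovered support:", np.flatnonzero(np.abs(w) > 0.5))
    ```

    On this noiseless synthetic problem the iteration recovers the three active coordinates; the ontology function in the record would then be built from such a sparse vector.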

  13. Model-based reinforcement learning with dimension reduction.

    PubMed

    Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi

    2016-12-01

    The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
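
    As a hedged sketch of the general model-based recipe the record describes (estimate a transition model from data, then derive a policy from the learned model), and not of the paper's LSCE method: a tabular toy problem where the model is counted from random exploration and the policy comes from value iteration on the learned model.

    ```python
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.9

    def true_step(s, a):
        """Toy environment: action 0 moves left, action 1 moves right."""
        return max(0, min(n_states - 1, s + (1 if a == 1 else -1)))

    # 1) Learn a tabular transition model from random-exploration data.
    #    (2000 samples comfortably cover all 10 state-action pairs.)
    counts = np.zeros((n_states, n_actions, n_states))
    rng = np.random.default_rng(1)
    for _ in range(2000):
        s, a = rng.integers(n_states), rng.integers(n_actions)
        counts[s, a, true_step(s, a)] += 1
    P = counts / counts.sum(axis=2, keepdims=True)  # P[s, a, s']

    # 2) Plan with the learned model: reward 1 only in the rightmost state.
    R = np.zeros(n_states)
    R[-1] = 1.0
    V = np.zeros(n_states)
    for _ in range(100):
        V = R + gamma * (P @ V).max(axis=1)  # Bellman backup under learned P
    policy = (P @ V).argmax(axis=1)
    print("greedy policy:", policy)
    ```

    The learned policy moves right in every state, toward the rewarding end of the chain. The paper's contribution is to make the "learn a transition model" step tractable in high dimensions via simultaneous dimension reduction, which this tabular sketch sidesteps.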

  14. Quantifying dynamic characteristics of human walking for comprehensive gait cycle.

    PubMed

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H

    2013-09-01

    Normal human walking typically consists of phases during which the body is statically unbalanced while maintaining dynamic stability. Quantifying the dynamic characteristics of human walking can provide better understanding of gait principles. We introduce a novel quantitative index, the dynamic gait measure (DGM), for comprehensive gait cycle. The DGM quantifies the effects of inertia and the static balance instability in terms of zero-moment point and ground projection of center of mass and incorporates the time-varying foot support region (FSR) and the threshold between static and dynamic walking. Also, a framework of determining the DGM from experimental data is introduced, in which the gait cycle segmentation is further refined. A multisegmental foot model is integrated into a biped system to reconstruct the walking motion from experiments, which demonstrates the time-varying FSR for different subphases. The proof-of-concept results of the DGM from a gait experiment are demonstrated. The DGM results are analyzed along with other established features and indices of normal human walking. The DGM provides a measure of static balance instability of biped walking during each (sub)phase as well as the entire gait cycle. The DGM of normal human walking has the potential to provide some scientific insights in understanding biped walking principles, which can also be useful for their engineering and clinical applications.
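    The DGM itself cannot be reconstructed from the abstract, but its static-balance ingredient, whether the ground projection of the center of mass (or the ZMP) lies inside the foot support region, can be illustrated. The convex-polygon containment test below is a hypothetical sketch, not the paper's formulation.

```python
def point_in_convex_polygon(point, vertices):
    """True if `point` lies inside the convex polygon given by CCW `vertices`.

    Illustrative static-balance check: a biped is statically balanced when
    the ground projection of its center of mass falls inside the foot
    support region (FSR), here modeled as a convex polygon.
    """
    px, py = point
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # For CCW ordering, the point must be on the left of every edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True
```

    In the paper's setting the FSR is time-varying across gait subphases, so such a test would be re-evaluated against a different polygon in each subphase.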

  15. Using the MEDiPORT humanoid robot to reduce procedural pain and distress in children with cancer: A pilot randomized controlled trial.

    PubMed

    Jibb, Lindsay A; Birnie, Kathryn A; Nathan, Paul C; Beran, Tanya N; Hum, Vanessa; Victor, J Charles; Stinson, Jennifer N

    2018-06-12

    Subcutaneous port needle insertions are painful and distressing for children with cancer. The interactive MEDiPORT robot has been programmed to implement psychological strategies to decrease pain and distress during this procedure. This study assessed the feasibility of a future MEDiPORT trial. The secondary aim was to determine the preliminary effectiveness of MEDiPORT in reducing child pain and distress during subcutaneous port accesses. This 5-month pilot randomized controlled trial used a web-based service to randomize 4- to 9-year-olds with cancer to the MEDiPORT cognitive-behavioral arm (robot using evidence-based cognitive-behavioral interventions) or active distraction arm (robot dancing and singing) while a nurse conducted a needle insertion. We assessed accrual and retention; technical difficulties; outcome measure completion by children, parents, and nurses; time taken to complete the study and clinical procedure; and child-, parent-, and nurse-rated acceptability. Descriptive analyses, with exploratory inferential testing of child pain and distress data, were used to address study aims. Forty children were randomized across study arms. Most (85%) eligible children participated and no children withdrew. Technical difficulties were more common in the cognitive-behavioral arm. Completion times for the study and needle insertion were acceptable and >96% of outcome measure items were completed. Overall, MEDiPORT and the study were acceptable to participants. There was no difference in pain between arms, but distress during the procedure was less pronounced in the active distraction arm. The MEDiPORT study appears feasible to implement as an adequately-powered effectiveness-assessing trial following modifications to the intervention and study protocol. ClinicalTrials.gov NCT02611739. © 2018 Wiley Periodicals, Inc.

  16. Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition

    PubMed Central

    Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford

    2010-01-01

    The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629

  17. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master–Slave Systems

    PubMed Central

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance, as it permits the end-effector to be flexibly interchanged according to the task at hand. This requires the development of control policies that not only cope with asymmetric master–slave systems but whose high-level components are also designed in a unified space, abstracted from device specifics. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system by generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips’ Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user’s wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics.
Finally, the proposed control framework offers a universal, flexible, and intuitive interface allowing for the performance of effective tele-manipulations. PMID:28421179

  18. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot

    PubMed Central

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes. PMID:29872389
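    The lazy greedy algorithm the abstract relies on is a standard technique for monotone submodular maximization, so a small illustrative sketch is possible; here a set-coverage gain stands in for the paper's information gain, and all names are assumptions.

```python
import heapq

def coverage_gain(item, selected):
    """Marginal gain of `item`: how many new elements it covers (submodular)."""
    covered = set().union(*selected) if selected else set()
    return len(item - covered)

def lazy_greedy(candidates, gain_fn, k):
    """Pick k items greedily for a monotone submodular gain, with lazy updates.

    Submodularity guarantees marginal gains only shrink as the selected set
    grows, so a stale upper bound popped from the heap can be re-evaluated
    instead of rescanning every candidate each round.
    """
    selected = []
    # Max-heap entries: (negated gain, round it was computed in, index, item).
    heap = [(-gain_fn(c, []), 0, i, c) for i, c in enumerate(candidates)]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_gain, computed_round, i, item = heapq.heappop(heap)
        if computed_round == len(selected):
            selected.append(item)  # gain is up to date: take it immediately
        else:
            # Stale gain: recompute against the current set and push back.
            heapq.heappush(heap, (-gain_fn(item, selected), len(selected), i, item))
    return selected
```

    The greedy solution carries the classic (1 - 1/e) approximation guarantee for monotone submodular objectives, which is the "theoretical justification" the abstract refers to.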

  19. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master-Slave Systems.

    PubMed

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance, as it permits the end-effector to be flexibly interchanged according to the task at hand. This requires the development of control policies that not only cope with asymmetric master-slave systems but whose high-level components are also designed in a unified space, abstracted from device specifics. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system by generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips' Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user's wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics.
Finally, the proposed control framework offers a universal, flexible, and intuitive interface allowing for the performance of effective tele-manipulations.

  20. Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot.

    PubMed

    Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki

    2018-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback-Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes.

  1. Radiation phantom with humanoid shape and adjustable thickness

    DOEpatents

    Lehmann, Joerg [Pleasanton, CA; Levy, Joshua [Salem, NY; Stern, Robin L [Lodi, CA; Siantar, Christine Hartmann [Livermore, CA; Goldberg, Zelanna [Carmichael, CA

    2006-12-19

    A radiation phantom comprising a body with a general humanoid shape and at least a portion having an adjustable thickness. In one embodiment, the portion with an adjustable thickness comprises at least one tissue-equivalent slice.

  2. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    PubMed Central

    Bowen, Chris; Ye, Gu; Alterovitz, Ron

    2015-01-01

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles.
We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642

  3. KSC-2012-4343

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – At the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the Morpheus prototype lander begins to lift off of the ground during a free-flight test. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

  4. Implementation of Push Recovery Strategy Using Triple Linear Inverted Pendulum Model in “T-FloW” Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Dimas Pristovani, R.; Raden Sanggar, D.; Pramadihanto, Dadet

    2018-04-01

    Push recovery is a human behavior: a strategy to defend the body against an external force in any environment. This paper describes a push recovery strategy that uses a MIMO decoupled control system method. The dynamics use a quasi-dynamic system based on the triple linear inverted pendulum model (TLIPM). The analysis of the TLIPM uses the zero moment point (ZMP) calculation from the ZMP simplification in our previous research. With this simplified dynamics, the control design can be reduced to three serial SISO systems with known and uncertain disturbance models in each inverted pendulum. Each pendulum has a different plan to damp the external force effect. In this experiment, a PID controller (closed loop) is used to set the damping characteristic. The experimental results show a success rate of about 85.71% when using the push recovery control strategy (closed-loop control), versus about 28.57% without it (open-loop control).
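    The triple linear inverted pendulum model is not detailed in the abstract. As a much-simplified illustration of the closed-loop-versus-open-loop comparison reported there, the sketch below simulates a single inverted pendulum hit by a push, with and without a PID ankle controller; the dynamics, gains, and names are all assumptions, not taken from T-FloW.

```python
import math

def simulate_pendulum(push_velocity, controller=None, dt=0.001, t_end=3.0):
    """Euler simulation of an inverted pendulum disturbed by a push.

    Returns the final lean angle (rad). `controller(theta, omega, integral)`
    should return an ankle torque; None means open loop (no recovery).
    """
    g, l, m = 9.81, 1.0, 1.0
    theta, omega, integral = 0.0, push_velocity, 0.0
    for _ in range(int(t_end / dt)):
        torque = 0.0
        if controller is not None:
            integral += theta * dt
            torque = controller(theta, omega, integral)
        # Inverted pendulum: gravity destabilizes, ankle torque opposes it.
        alpha = (g / l) * math.sin(theta) - torque / (m * l * l)
        omega += alpha * dt
        theta += omega * dt
        theta = max(-math.pi / 2, min(math.pi / 2, theta))  # clamp at "fallen"
    return theta

def pid_controller(theta, omega, integral, kp=60.0, ki=5.0, kd=15.0):
    """PID on the lean angle: a toy stand-in for a closed-loop recovery damper."""
    return kp * theta + kd * omega + ki * integral
```

    With these (hypothetical) gains the closed-loop pendulum returns to upright after the push, while the open-loop one falls to the clamp angle, mirroring the closed-versus-open-loop gap in the abstract.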

  5. Dynamically variable negative stiffness structures.

    PubMed

    Churchill, Christopher B; Shahan, David W; Smith, Sloan P; Keefe, Andrew C; McKnight, Geoffrey P

    2016-02-01

    Variable stiffness structures that enable a wide range of efficient load-bearing and dexterous activity are ubiquitous in mammalian musculoskeletal systems but are rare in engineered systems because of their complexity, power, and cost. We present a new negative stiffness-based load-bearing structure with dynamically tunable stiffness. Negative stiffness, traditionally used to achieve novel response from passive structures, is a powerful tool to achieve dynamic stiffness changes when configured with an active component. Using relatively simple hardware and low-power, low-frequency actuation, we show an assembly capable of fast (<10 ms) and useful (>100×) dynamic stiffness control. This approach mitigates limitations of conventional tunable stiffness structures that exhibit either small (<30%) stiffness change, high friction, poor load/torque transmission at low stiffness, or high power active control at the frequencies of interest. We experimentally demonstrate actively tunable vibration isolation and stiffness tuning independent of supported loads, enhancing applications such as humanoid robotic limbs and lightweight adaptive vibration isolators.

  6. Predictive Coding Strategies for Developmental Neurorobotics

    PubMed Central

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment of time. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if prediction-error minimization is used as the objective function. PMID:22586416

  7. A novel approach to locomotion learning: Actor-Critic architecture using central pattern generators and dynamic motor primitives.

    PubMed

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2014-01-01

    In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a "reshaping" function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal "reshaping" functions). In this article, we use this architecture with the actor-critic algorithms for finding a good "reshaping" function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion.
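    The abstract does not specify which CPG model is used. As a hedged example, a Hopf oscillator is a common CPG building block for the kind of rhythmic baseline motion described above, which a DMP-style term could then "reshape"; the parameters and names below are illustrative only.

```python
import math

def hopf_cpg(mu=1.0, omega=2 * math.pi, dt=0.001, steps=10000, x0=0.1, y0=0.0):
    """Integrate a Hopf oscillator with forward Euler (illustrative CPG).

    The oscillator converges to a stable limit cycle of radius sqrt(mu)
    regardless of (nonzero) initial conditions, giving a robust rhythmic
    signal with frequency omega. Returns the x-component trajectory.
    """
    x, y = x0, y0
    trajectory = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y   # radial contraction + rotation
        dy = (mu - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
        trajectory.append(x)
    return trajectory
```

    The limit-cycle stability is what makes such oscillators attractive as a "locomotor center": perturbations to the state decay back onto the cycle rather than accumulating.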

  8. A novel approach to locomotion learning: Actor-Critic architecture using central pattern generators and dynamic motor primitives

    PubMed Central

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2014-01-01

    In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a “reshaping” function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal “reshaping” functions). In this article, we use this architecture with the actor-critic algorithms for finding a good “reshaping” function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion. PMID:25324773

  9. Analysis on the Load Carrying Mechanism Integrated as Heterogeneous Co-operative Manipulator in a Walking Wheelchair

    NASA Astrophysics Data System (ADS)

    Rajay Vedaraj, I. S.; Jain, Ritika; Rao, B. V. A.

    2014-07-01

    After industrial robots came into existence in the 1960s, robotic technology developed through the design and analysis of robots in various forms, in industry as well as in domestic applications. Nowadays, alongside the automotive sector, robots are making a great impact on quality and production rate, establishing themselves as reliable in various other sectors as well. Robotic technology has undergone several phase transitions, from early humanoid forms to present-day manipulators. Depending on the form in which they exist, robot manipulators are designed as serial manipulators or parallel manipulators. Each type can individually prove effective, though both have various drawbacks in design and kinematic analysis. The versatility of robots can be increased by making them work in an environment where the same work volume is shared by more than one manipulator. This work volume can be identified as the co-operative work volume of those manipulators. Here, interference of manipulators in the work volume of other manipulators is possible and is made obstacle free. The main advantage of co-operative manipulators is that when a number of independent manipulators are put together in a co-operative work envelope, the efficiency and ability to perform tasks are greatly enhanced. The main disadvantage of co-operative manipulators lies in the complication of their design, even for a simple application, in almost all fields. In this paper, a co-operative design of robot manipulators working in a co-operative environment is developed and analysed for its efficacy. In industrial applications, when robotic manipulators are put together in larger numbers, trajectory planning becomes the toughest task in the work cell. Proper design can remove the design defects of co-operative manipulators so that they can be utilized more efficiently.
    In the proposed research, an analysis is made of such a co-operative manipulator used for climbing stairs with a three-leg design, and the mechanism integrated into the system is also analysed. The kinematics of the legs are analysed separately, and the legs are designed to carry a maximum of 175 kg, which is sustained by the center leg and shared equally by the dual wing legs during the walking phase. In the proposed design, a screw-jack mechanism is used as the central leg to share the load; the load-sharing capability of the whole system is analysed and conclusions are drawn in terms of failure modes.

  10. Fronto-parietal coding of goal-directed actions performed by artificial agents.

    PubMed

    Kupferberg, Aleksandra; Iacoboni, Marco; Flanagin, Virginia; Huber, Markus; Kasparbauer, Anna; Baumgartner, Thomas; Hasler, Gregor; Schmidt, Florian; Borst, Christoph; Glasauer, Stefan

    2018-03-01

    With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates the naturalness and easiness of interaction. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it is unknown whether this activity is modulated by observing grasping (self-related action) and pointing actions (other-related action) performed by an artificial agent depending on the action goal. Therefore, we designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by both agents, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network not only for observing a human but also a robotic agent for both self-related and social actions. The debriefing after the experiment has shown that actions of human-like artificial agents can be perceived as being goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment. © 2017 Wiley Periodicals, Inc.

  11. Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals

    PubMed Central

    Mick, Sébastien; Cattaert, Daniel; Paclet, Florent; Oudeyer, Pierre-Yves; de Rugy, Aymar

    2017-01-01

    Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, by assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector, whose magnitude is linearly or quadratically related to force input magnitude. With the robotic arm from the open source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3-D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of their global usability in the context of the task. According to performance metrics and questionnaire results, velocity control modes were found to perform better than position control mode in terms of accuracy and quality of control as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even if these two modes did not clearly distinguish from one another when it comes to performance and usability assessment. These results highlight the need to take into account user experience as one of the key criteria for the design of control modes intended to operate limb prostheses. PMID:29118699
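    The exact transfer functions of the three control modes are not given in the abstract. The sketch below shows the general idea of linear versus quadratic force-to-velocity mappings with a dead zone; the gain, threshold, and function name are hypothetical, not the study's implementation.

```python
def force_to_velocity(force, gain=1.0, deadzone=0.05, quadratic=False):
    """Map a 1-D force input to a velocity command (illustrative sketch).

    Linear mode scales force directly; quadratic mode squares the magnitude,
    which damps small (noisy) inputs while preserving large commands. A dead
    zone around zero suppresses sensor noise when the user is at rest.
    """
    magnitude = abs(force)
    if magnitude < deadzone:
        return 0.0
    sign = 1.0 if force > 0 else -1.0
    if quadratic:
        return sign * gain * magnitude ** 2
    return sign * gain * magnitude
```

    For normalized inputs below 1, the quadratic map yields smaller commands than the linear one, which is one plausible reason subjects found it easier to make fine corrections with it.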

  12. Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale

    PubMed Central

    Normand, Jean-Marie; Sanchez-Vives, Maria V.; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A.; Slater, Mel

    2012-01-01

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human’s movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale. PMID:23118987

  13. Beaming into the rat world: enabling real-time interaction between rat and human each at their own scale.

    PubMed

    Normand, Jean-Marie; Sanchez-Vives, Maria V; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A; Slater, Mel

    2012-01-01

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale.

  14. Effects of Grip-Force, Contact, and Acceleration Feedback on a Teleoperated Pick-and-Place Task.

    PubMed

    Khurshid, Rebecca P; Fitter, Naomi T; Fedalei, Elizabeth A; Kuchenbecker, Katherine J

    2017-01-01

    The multifaceted human sense of touch is fundamental to direct manipulation, but technical challenges prevent most teleoperation systems from providing even a single modality of haptic feedback, such as force feedback. This paper postulates that ungrounded grip-force, fingertip-contact-and-pressure, and high-frequency acceleration haptic feedback will improve human performance of a teleoperated pick-and-place task. Thirty subjects used a teleoperation system consisting of a haptic device worn on the subject's right hand, a remote PR2 humanoid robot, and a Vicon motion capture system to move an object to a target location. Each subject completed the pick-and-place task 10 times under each of the eight haptic conditions obtained by turning grip-force feedback, contact feedback, and acceleration feedback on and off. To understand how object stiffness affects the utility of the feedback, half of the subjects completed the task with a flexible plastic cup, and the others used a rigid plastic block. The results indicate that the addition of grip-force feedback with gain switching enabled subjects to hold both the flexible and rigid objects more stably; it also allowed subjects who manipulated the rigid block to hold the object more delicately and to better control the motion of the remote robot's hand. Contact feedback improved the ability of subjects who manipulated the flexible cup to move the robot's arm in space, but it deteriorated this ability for subjects who manipulated the rigid block. Contact feedback also caused subjects to hold the flexible cup less stably but the rigid block more securely. Finally, adding acceleration feedback slightly improved the subjects' performance when setting the object down, as originally hypothesized; interestingly, it also allowed subjects to feel vibrations produced by the robot's motion, causing them to be more careful when completing the task. This study supports the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and motivates further improvements to fingertip-contact-and-pressure feedback.

  15. Crew Restraint Design for the International Space Station

    NASA Technical Reports Server (NTRS)

    Norris, Lena; Holden, Kritina; Whitmore, Mihriban

    2006-01-01

    With permanent human presence onboard the International Space Station (ISS), crews will be living and working in microgravity, dealing with the challenges of a weightless environment. In addition, the confined nature of the spacecraft environment results in ergonomic challenges such as limited visibility and access to the activity areas, as well as prolonged periods of unnatural postures. Without optimal restraints, crewmembers may be hampered in performing some on-orbit tasks. Currently, many tasks on ISS are performed with the crew restrained merely by hooking their arms or toes around handrails to steady themselves. This is adequate for some tasks, but not all. There have been some reports of discomfort and calluses on the tops of the toes. In addition, this type of restraint is simply insufficient for tasks that require a large degree of stability. Glovebox design is a good example of a confined workstation concept requiring stability for successful use. Gloveboxes are widely used in industry, university, and government laboratories, as well as in the space environment, and are known to cause postural limitations and visual restrictions. Although there are numerous guidelines pertaining to ventilation, seals, and glove attachment, most of the data have been gathered in a 1-g environment, or come from studies conducted before the early 1980s. Little is known about how best to restrain a crewmember using a glovebox in microgravity. Another ISS task that requires special consideration with respect to restraints is robotic teleoperation. The Robot Systems Technology Branch at the NASA Johnson Space Center is developing a humanoid robot astronaut, or Robonaut. It is being designed to perform extravehicular activities (EVAs) in the hazardous environment of space. An astronaut located inside the ISS will remotely operate Robonaut through a telepresence control system. Essentially, the robot mimics every move the operator makes. This requires the operator to be stable enough to prevent inadvertent movements, while retaining the flexibility to accomplish the controlled movements of the robot. Some type of special-purpose restraint will be required to operate Robonaut and similar devices.

  16. Stride lengths, speed and energy costs in walking of Australopithecus afarensis: using evolutionary robotics to predict locomotion of early human ancestors

    PubMed Central

    Sellers, William I; Cain, Gemma M; Wang, Weijie; Crompton, Robin H

    2005-01-01

    This paper uses techniques from evolutionary robotics to predict the most energy-efficient upright walking gait for the early human relative Australopithecus afarensis, based on the proportions of the 3.2 million year old AL 288-1 ‘Lucy’ skeleton, and matches predictions against the nearly contemporaneous (3.5–3.6 million year old) Laetoli fossil footprint trails. The technique creates gaits de novo and uses genetic algorithm optimization to search for the most efficient patterns of simulated muscular contraction at a variety of speeds. The model was first verified by predicting gaits for living human subjects, and comparing costs, stride lengths and speeds to experimentally determined values for the same subjects. Subsequent simulations for A. afarensis yield estimates of the range of walking speeds from 0.6 to 1.3 m s−1 at a cost of 7.0 J kg−1 m−1 for the lowest speeds, falling to 5.8 J kg−1 m−1 at 1.0 m s−1, and rising to 6.2 J kg−1 m−1 at the maximum speed achieved. Speeds previously estimated for the makers of the Laetoli footprint trails (0.56 or 0.64 m s−1 for Trail 1, 0.72 or 0.75 m s−1 for Trail 2/3) may have been underestimated, substantially so for Trail 2/3, with true values in excess of 0.7 and 1.0 m s−1, respectively. The predictions conflict with suggestions that A. afarensis used a ‘shuffling’ gait, indicating rather that the species was a fully competent biped. PMID:16849203

  17. Proceeding of human exoskeleton technology and discussions on future research

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xie, Hanxing; Li, Weilin; Yao, Zheng

    2014-05-01

    After more than half a century of intense effort, the development of exoskeletons has seen major advances, and several remarkable achievements have been made. Reviews of the development history of exoskeletons are presented, in both active and passive categories. Major models are introduced, and typical technologies are commented on. Difficulties in control algorithms, drive systems, power sources, and man-machine interfaces are discussed. Current research routes and major development methods are mapped and critically analyzed, and in the process some key problems are revealed. First, an exoskeleton is fundamentally different from a biped robot, and studies that simply carry over robot technologies are largely misdirected. Second, biomechanical studies have been used only to track the motion of the human body; the interaction between human and machine is seldom studied. Third, traditional development approaches focused on servo control have an inherent deficiency that hinders the construction of portable systems. Research attention should be shifted to the human side of the coupled system, and the human ability to learn and adapt should play a more significant role in the control algorithms. Having summarized the major difficulties, possible future work is discussed. It is argued that, since a distinct boundary cannot be drawn in such a strongly coupled human-exoskeleton system, the more complex the control system becomes, the more difficult it is for the user to learn to use. It is suggested that the exoskeleton should be treated as a simple wearable tool, and that lowering its level of automation may lead to a brighter research outlook. This simplification is by no means easy, as it requires theoretical support from fields such as biomechanics, ergonomics, and bionics.

  18. Four-Component Catalytic Machinery: Reversible Three-State Control of Organocatalysis by Walking Back and Forth on a Track.

    PubMed

    Mittal, Nikita; Özer, Merve S; Schmittel, Michael

    2018-04-02

    A three-component supramolecular walker system is presented where a two-footed ligand (biped) walks back and forth on a tetrahedral 3D track upon the addition and removal of copper(I) ions, respectively. The addition of N-methylpyrrolidine as a catalyst to the walker system generates a four-component catalytic machinery, which acts as a three-state switchable catalytic ensemble in the presence of substrates for a conjugate addition. The copper(I)-ion-initiated walking process of the biped ligand on the track regulates the catalytic activity in three steps: ON versus int ON (intermediate ON) versus OFF. To establish the operation of the four-component catalytic machinery in a mixture of all constituents, forward and backward cycles were performed in situ illustrating that both the walking process and catalytic action are fully reversible and reproducible.

  19. A constriction resistance model of conjugated polymer based piezoresistive sensors for electronic skin applications.

    PubMed

    Khalili, N; Naguib, H E; Kwon, R H

    2016-05-14

    Human intervention can be reduced through the development of sensing devices with a wide range of applications, including humanoid robots and remote or minimally invasive surgery. Like the five human senses, such sensors interface with their surroundings to stimulate a suitable response or action. The sense of touch, which arises in human skin, is among the most challenging senses to emulate due to its ultra-high sensitivity, and it poses novel challenges in the field of biomimetic robotics. In this work, using a multiphase reaction, a polypyrrole (PPy)-based hydrogel is developed as a resistive-type pressure sensor with an intrinsically elastic microstructure formed from three-dimensional hollow spheres. It is shown that the electrical conductivity of the fabricated PPy-based piezoresistive sensors is enhanced by the addition of conductive fillers, endowing the sensors with higher sensitivity. A semi-analytical constriction-resistance model is developed that accounts for the real contact area between the PPy hydrogel sensors and the electrode, along with the dependence of the contact-resistance change on the applied load. The model is then solved using a Monte Carlo technique and its corresponding sensitivity is obtained. Compared with their experimental counterparts, the model's predictions show good tracking ability.

  20. Adaptive training of cortical feature maps for a robot sensorimotor controller.

    PubMed

    Adams, Samantha V; Wennekers, Thomas; Denham, Sue; Culverhouse, Phil F

    2013-08-01

    This work investigates self-organising cortical feature maps (SOFMs) based upon the Kohonen Self-Organising Map (SOM) but implemented with spiking neural networks. In future work, the feature maps are intended as the basis for a sensorimotor controller for an autonomous humanoid robot. Traditional SOM methods require some modifications to be useful for autonomous robotic applications. Ideally the map training process should be self-regulating and not require predefined training files or the usual SOM parameter reduction schedules. It would also be desirable if the organised map had some flexibility to accommodate new information whilst preserving previous learnt patterns. Here methods are described which have been used to develop a cortical motor map training system which goes some way towards addressing these issues. The work is presented under the general term 'Adaptive Plasticity' and the main contribution is the development of a 'plasticity resource' (PR) which is modelled as a global parameter which expresses the rate of map development and is related directly to learning on the afferent (input) connections. The PR is used to control map training in place of a traditional learning rate parameter. In conjunction with the PR, random generation of inputs from a set of exemplar patterns is used rather than predefined datasets and enables maps to be trained without deciding in advance how much data is required. An added benefit of the PR is that, unlike a traditional learning rate, it can increase as well as decrease in response to the demands of the input and so allows the map to accommodate new information when the inputs are changed during training. Copyright © 2013 Elsevier Ltd. All rights reserved.
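    The abstract does not give the update law for the plasticity resource (PR); the sketch below is a toy stand-in (the depletion/recovery dynamics and all names are invented, not the authors' formulation) showing the general idea of a global, error-driven quantity replacing a fixed learning rate in a SOM-style update:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((10, 2))  # 10 map units with 2-D afferent weights
pr = 1.0                       # plasticity resource: global, replaces the learning rate

def train_step(x, weights, pr, depletion=0.05, recovery=0.1):
    """One SOM-style update in which the PR scales afferent learning.

    The PR depletes as the map settles (small matching error) but recovers
    when novel inputs produce large errors, so it can rise as well as fall.
    """
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    err = np.linalg.norm(weights[bmu] - x)
    weights[bmu] += pr * (x - weights[bmu])
    pr = max(0.0, pr - depletion * (1.0 - err)) + recovery * err
    return weights, pr
```

    Because training winds down naturally as the PR decays toward zero, no predefined schedule or dataset size is needed, consistent with the self-regulating behaviour described in the abstract.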

  1. Predictive Mechanisms Are Not Involved the Same Way during Human-Human vs. Human-Machine Interactions: A Review

    PubMed Central

    Sahaï, Aïsha; Pacherie, Elisabeth; Grynszpan, Ouriel; Berberian, Bruno

    2017-01-01

    Nowadays, interactions with others do not only involve human peers but also automated systems. Many studies suggest that the motor predictive systems engaged during action execution are also involved during joint actions with peers and during the observation of other human-generated actions. Indeed, the comparator model hypothesis suggests that the comparison between a predicted state and an estimated real state enables motor control and, by a similar mechanism, the understanding and anticipation of observed actions. Such a mechanism allows predictions about an ongoing action and is essential to action regulation, especially during joint actions with peers. Interestingly, the same comparison process has been shown to be involved in the construction of an individual's sense of agency, both for self-generated and for observed human-generated actions. However, the implication of such predictive mechanisms during interactions with machines is not consensual, probably due to the high heterogeneity of the automata used in these experiments, from very simplistic devices to full humanoid robots. The discrepancies observed during human/machine interactions could arise from the absence of action/observation matching abilities when interacting with traditional low-level automata. Consistently, the difficulty of building joint agency with this kind of machine could stem from the same problem. In this context, we aim to review the studies investigating predictive mechanisms during social interactions with humans and with automated artificial systems. We start by presenting human data that show the involvement of predictions in action control and in the sense of agency during social interactions. Thereafter, we confront this literature with data from the robotics field. Finally, we address upcoming issues in the field of robotics related to automated systems intended to act as collaborative agents. PMID:29081744

  2. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    PubMed

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, which restricts the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design that increases the number of output commands. With this method, brain activity is modulated by sequences of MI (sMI) tasks, constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were decoded from EEG signals and mapped to specific commands. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment, and the average accuracy over the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm were 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
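    The command count quoted above follows from summing the 2^k binary (left/right) sequences of each length k up to N; a minimal check of that arithmetic (the function name is ours):

```python
def n_commands(max_len):
    """Count the distinct left/right MI sequences of length 1..max_len.

    The sum of 2**k for k = 1..N equals 2 * (2**N - 1), the formula
    given in the abstract for self-paced sMI tasks.
    """
    return sum(2 ** k for k in range(1, max_len + 1))

# A six-class system fits within sequences of length 2: 2 + 4 = 6 commands.
```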

  3. Phylogenetic relationships among amphisbaenian reptiles based on complete mitochondrial genomic sequences.

    PubMed

    Macey, J Robert; Papenfuss, Theodore J; Kuehl, Jennifer V; Fourcade, H Mathew; Boore, Jeffrey L

    2004-10-01

    Complete mitochondrial genomic sequences are reported from 12 members in the four families of the reptile group Amphisbaenia. Analysis of 11,946 aligned nucleotide positions (5797 informative) produces a robust phylogenetic hypothesis. The family Rhineuridae is basal and Bipedidae is the sister taxon to the Amphisbaenidae plus Trogonophidae. Amphisbaenian reptiles are surprisingly old, predating the breakup of Pangaea 200 million years before present, because successive basal taxa (Rhineuridae and Bipedidae) are situated in tectonic regions of Laurasia and nested taxa (Amphisbaenidae and Trogonophidae) are found in Gondwanan regions. Thorough sampling within the Bipedidae shows that it is not tectonic movement of Baja California away from the Mexican mainland that is primary in isolating Bipes species, but rather that primary vicariance occurred between northern and southern groups. Amphisbaenian families show parallel reduction in number of limbs and Bipes species exhibit parallel reduction in number of digits. A measure is developed for comparing the phylogenetic information content of various genes. A synapomorphic trait defining the Bipedidae is a shift from the typical vertebrate mitochondrial gene arrangement to the derived state of trnE and nad6. In addition, a tandem duplication of trnT and trnP is observed in Bipes biporus with a pattern of pseudogene formation that varies among populations. The first case of convergent rearrangement of the mitochondrial genome among animals demonstrated by complete genomic sequences is reported. Relative to most vertebrates, the Rhineuridae has the block nad6, trnE switched in order with the block cob, trnT, trnP, as they are in birds.

  4. Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †

    PubMed Central

    Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco

    2016-01-01

    Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of estimation of real-time kinetics information such as forces and torques during the motion capture. In this paper, we present a novel approach for a human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker based methods, inertial and contact force sensor modalities and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present experimental analysis on subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetics quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394

  5. Reaching for the Unreachable: Reorganization of Reaching with Walking

    PubMed Central

    Grzyb, Beata J.; Smith, Linda B.; del Pobil, Angel P.

    2015-01-01

    Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence of an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Nonwalkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions after failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main results of our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking. PMID:26110046

  6. Fall Down Detection Under Smart Home System.

    PubMed

    Juang, Li-Hong; Wu, Ming-Ni

    2015-10-01

    Medical technology is an inevitable trend for an aging population, making intelligent home care an important direction for science and technology development; in particular, in-home safety management for the elderly is an increasingly important issue. In this research, a low-complexity algorithm based on a triangular pattern rule is proposed that can quickly detect fall-down movements of a human figure through a camera-equipped robot installed at home, enabling real-time judgment of falls of in-home elderly people. This paper presents a preliminary design and experimental results for detecting fall-down movements from body posture, using image pre-processing and three triangular-mass-central points to extract the characteristic features. The results show that with the proposed characteristic values the accuracy reaches up to 90% for a single posture, and rises to 100% when a continuous-time sampling criterion and a support vector machine (SVM) classifier are used.
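    The triangular-mass-central-point features are not specified in the abstract; as a loose, hypothetical illustration of posture-from-keypoints reasoning (all names and the single-angle feature are our simplification, not the published method), one can threshold the tilt of the head-to-legs axis in image coordinates:

```python
import math

def tilt_from_vertical(p_head, p_legs):
    """Angle (degrees) of the head-to-legs axis from the image vertical.

    Image y grows downward, so an upright figure has the head above the
    legs (negative dy). Near 0 deg suggests upright; near 90 deg, fallen.
    """
    dx = p_head[0] - p_legs[0]
    dy = p_head[1] - p_legs[1]
    return abs(math.degrees(math.atan2(dx, -dy)))

def looks_fallen(p_head, p_legs, threshold_deg=60.0):
    """Crude per-frame fall flag based on the tilt feature above."""
    return tilt_from_vertical(p_head, p_legs) > threshold_deg
```

    In the paper, frame-level posture features of this kind are further combined over time and fed to an SVM, which is what lifts accuracy from around 90% to 100%.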

  7. Dynamically variable negative stiffness structures

    PubMed Central

    Churchill, Christopher B.; Shahan, David W.; Smith, Sloan P.; Keefe, Andrew C.; McKnight, Geoffrey P.

    2016-01-01

    Variable stiffness structures that enable a wide range of efficient load-bearing and dexterous activity are ubiquitous in mammalian musculoskeletal systems but are rare in engineered systems because of their complexity, power, and cost. We present a new negative stiffness–based load-bearing structure with dynamically tunable stiffness. Negative stiffness, traditionally used to achieve novel response from passive structures, is a powerful tool to achieve dynamic stiffness changes when configured with an active component. Using relatively simple hardware and low-power, low-frequency actuation, we show an assembly capable of fast (<10 ms) and useful (>100×) dynamic stiffness control. This approach mitigates limitations of conventional tunable stiffness structures that exhibit either small (<30%) stiffness change, high friction, poor load/torque transmission at low stiffness, or high power active control at the frequencies of interest. We experimentally demonstrate actively tunable vibration isolation and stiffness tuning independent of supported loads, enhancing applications such as humanoid robotic limbs and lightweight adaptive vibration isolators. PMID:26989771

  8. Alaska Athabascan stellar astronomy

    NASA Astrophysics Data System (ADS)

    Cannon, Christopher M.

    2014-01-01

    Stellar astronomy is a fundamental component of Alaska Athabascan cultures that facilitates time-reckoning, navigation, weather forecasting, and cosmology. Evidence from the linguistic record suggests that a group of stars corresponding to the Big Dipper is the only widely attested constellation across the Northern Athabascan languages. However, instruction from expert Athabascan consultants shows that the correlation of these names with the Big Dipper is only partial. In Alaska Gwich'in, Ahtna, and Upper Tanana languages the Big Dipper is identified as one part of a much larger circumpolar humanoid constellation that spans more than 133 degrees across the sky. The Big Dipper is identified as a tail, while the other remaining asterisms within the humanoid constellation are named using other body part terms. The concept of a whole-sky humanoid constellation provides a single unifying system for mapping the night sky, and the reliance on body-part metaphors renders the system highly mnemonic. By recognizing one part of the constellation the stargazer is immediately able to identify the remaining parts based on an existing mental map of the human body. The circumpolar position of a whole-sky constellation yields a highly functional system that facilitates both navigation and time-reckoning in the subarctic. Northern Athabascan astronomy is not only much richer than previously described; it also provides evidence for a completely novel and previously undocumented way of conceptualizing the sky---one that is unique to the subarctic and uniquely adapted to northern cultures. The concept of a large humanoid constellation may be widespread across the entire subarctic and have great antiquity. In addition, the use of cognate body part terms describing asterisms within humanoid constellations is similarly found in Navajo, suggesting a common ancestor from which Northern and Southern Athabascan stellar naming strategies derived.

  9. Compact and low-cost humanoid hand powered by nylon artificial muscles.

    PubMed

    Wu, Lianjun; Jung de Andrade, Monica; Saharan, Lokesh Kumar; Rome, Richard Steven; Baughman, Ray H; Tadesse, Yonas

    2017-02-03

    This paper focuses on design, fabrication and characterization of a biomimetic, compact, low-cost and lightweight 3D printed humanoid hand (TCP Hand) that is actuated by twisted and coiled polymeric (TCP) artificial muscles. The TCP muscles were recently introduced and provided unprecedented strain, mechanical work, and lifecycle (Haines et al 2014 Science 343 868-72). The five-fingered humanoid hand is under-actuated and has 16 degrees of freedom (DOF) in total (15 for fingers and 1 at the palm). In the under-actuated hand designs, a single actuator provides coupled motions at the phalanges of each finger. Two different designs are presented along with the essential elements consisting of actuators, springs, tendons and guide systems. Experiments were conducted to investigate the performance of the TCP muscles in response to the power input (power magnitude, type of wave form such as pulsed or square wave, and pulse duration) and the resulting actuation stroke and force generation. A kinematic model of the flexor tendons was developed to simulate the flexion motion and compare with experimental results. For fast finger movements, short high-power pulses were employed. Finally, we demonstrated the grasping of various objects using the humanoid TCP hand showing an array of functions similar to a natural hand.

  10. Conway Morris: Extraterrestrials: Aliens like us?

    NASA Astrophysics Data System (ADS)

    Morris, Simon Conway

    2005-08-01

    So what are they going to be like, those long-expected extraterrestrials? Hideous hydrocarbon arachnoids, waving laser cannons as they chase screaming humans, repulsively surveying the scene through empathy-free compound eyes? Or maybe laughing bipeds, chatting away, holding a glass of wine, a bit like us?

  11. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to adjust the desired motion trajectory online from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
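    The inverted-pendulum adjustment can be illustrated with the standard linear inverted pendulum model (LIPM); the constant centre-of-mass height and the analytic cosh/sinh solution below are a generic textbook formulation, not the authors' implementation:

```python
import math

G = 9.81   # gravity (m/s^2)
Z0 = 0.9   # assumed constant CoM (pendulum) height in metres

def lipm_state(x0, v0, foot_x, t):
    """Analytic state of the linear inverted pendulum after time t:
    x'' = (g/z0) * (x - foot_x), solved with cosh/sinh."""
    w = math.sqrt(G / Z0)                 # natural frequency
    dx = x0 - foot_x
    x = foot_x + dx * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = dx * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v
```

    Given the current CoM state and stance-foot position, this closed form predicts where the CoM will be, which is the quantity such controllers use to adjust the desired trajectory.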

  12. Robonaut 2 on the International Space Station: Status Update and Preparations for IVA Mobility

    NASA Technical Reports Server (NTRS)

    Ahlstrom, Thomas D.; Diftler, Myron E.; Berka, Reginald B.; Badger, Julia M.; Yayathi, Sandeep; Curtis, Andrew W.; Joyce, Charles A.

    2013-01-01

    Robotics engineers, ground controllers and International Space Station (ISS) crew have been running successful experiments using Robonaut 2 (R2) on-board the ISS for more than a year. This humanoid upper body robot continues to expand its list of achievements and its capabilities to safely demonstrate maintenance and servicing tasks while working alongside human crewmembers. The next phase of the ISS R2 project will transition from a stationary Intra Vehicular Activity (IVA) upper body using a power/data umbilical, to an IVA mobile system with legs for repositioning, a battery backpack power supply, and wireless communications. These upgrades will enable the R2 team to evaluate hardware performance and to develop additional control algorithms and control verification techniques with R2 inside the ISS in preparation for the Extra Vehicular Activity (EVA) phase of R2 operations. As R2 becomes more capable of assisting with maintenance tasks with minimal supervision, including repositioning itself to different work sites, the ISS crew will be burdened with fewer maintenance chores, leaving them more time to conduct other activities. R2's developers at the Johnson Space Center (JSC) are preparing the R2 IVA mobility hardware and software upgrades for delivery to the ISS in late 2013. This paper summarizes R2 ISS achievements to date, briefly describes the R2 IVA mobility upgrades, and discusses the R2 IVA mobility objectives and plans.

  13. A future of living machines?: International trends and prospects in biomimetic and biohybrid systems

    NASA Astrophysics Data System (ADS)

    Prescott, Tony J.; Lepora, Nathan; Vershure, Paul F. M. J.

    2014-03-01

    Research in the fields of biomimetic and biohybrid systems is developing at an accelerating rate. Biomimetics can be understood as the development of new technologies using principles abstracted from the study of biological systems; however, biomimetics can also be viewed from an alternate perspective as an important methodology for improving our understanding of the world we live in and of ourselves as biological organisms. A biohybrid entity comprises at least one artificial (engineered) component combined with a biological one. With technologies such as microscale mobile computing, prosthetics and implants, humankind is moving towards a more biohybrid future in which biomimetics helps us to engineer biocompatible technologies. This paper reviews recent progress in the development of biomimetic and biohybrid systems focusing particularly on technologies that emulate living organisms—living machines. Based on our recent bibliographic analysis [1] we examine how biomimetics is already creating life-like robots and identify some key unresolved challenges that constitute bottlenecks for the field. Drawing on our recent research in biomimetic mammalian robots, including humanoids, we review the future prospects for such machines and consider some of their likely impacts on society, including the existential risk of creating artifacts with significant autonomy that could come to match or exceed humankind in intelligence. We conclude that living machines are more likely to be a benefit than a threat but that we should also ensure that progress in biomimetics and biohybrid systems is made with broad societal consent.

  14. Feasibility study of a randomised controlled trial to investigate the effectiveness of using a humanoid robot to improve the social skills of children with autism spectrum disorder (Kaspar RCT): a study protocol

    PubMed Central

    Mengoni, Silvana E; Irvine, Karen; Thakur, Deepshikha; Barton, Garry; Dautenhahn, Kerstin; Guldberg, Karen; Robins, Ben; Wellsted, David; Sharma, Shivani

    2017-01-01

    Introduction Interventions using robot-assisted therapy may be beneficial for the social skills development of children with autism spectrum disorder (ASD); however, randomised controlled trials (RCTs) are lacking. The present research aims to assess the feasibility of conducting an RCT evaluating the effectiveness of a social skills intervention using Kinesics and Synchronisation in Personal Assistant Robotics (Kaspar) with children with ASD. Methods and analysis Forty children will be recruited. Inclusion criteria are the following: aged 5–10 years, confirmed ASD diagnosis, IQ over 70, English-language comprehension, a carer who can complete questionnaires in English and no current participation in a private social communication intervention. Children will be randomised to receive an intervention with a therapist and Kaspar, or with the therapist only. They will receive two familiarisation sessions and six treatment sessions for 8 weeks. They will be assessed at baseline, and at 10 and 22 weeks after baseline. The primary outcome of this study is to evaluate whether the predetermined feasibility criteria for a full-scale trial are met. The potential primary outcome measures for a full-scale trial are the Social Communication Questionnaire and the Social Skills Improvement System. We will conduct a preliminary economic analysis. After the study has ended, a sample of 20 participants and their families will be invited to participate in semistructured interviews to explore the feasibility and acceptability of the study’s methods and intervention. Ethics and dissemination Parents/carers will provide informed consent, and children will give assent, where appropriate. Care will be taken to avoid pressure or coercion to participate. Aftercare is available from the recruiting NHS Trust, and a phased withdrawal protocol will be followed if children become excessively attached to the robot. 
The results of the study will be disseminated to academic audiences and non-academic stakeholders, for example, families of children with ASD, support groups, clinicians and charities. Trial registration number ISRCTN registry (ISRCTN14156001); Pre-results. PMID:28645986

  15. Feasibility study of a randomised controlled trial to investigate the effectiveness of using a humanoid robot to improve the social skills of children with autism spectrum disorder (Kaspar RCT): a study protocol.

    PubMed

    Mengoni, Silvana E; Irvine, Karen; Thakur, Deepshikha; Barton, Garry; Dautenhahn, Kerstin; Guldberg, Karen; Robins, Ben; Wellsted, David; Sharma, Shivani

    2017-06-22

    Interventions using robot-assisted therapy may be beneficial for the social skills development of children with autism spectrum disorder (ASD); however, randomised controlled trials (RCTs) are lacking. The present research aims to assess the feasibility of conducting an RCT evaluating the effectiveness of a social skills intervention using Kinesics and Synchronisation in Personal Assistant Robotics (Kaspar) with children with ASD. Forty children will be recruited. Inclusion criteria are the following: aged 5-10 years, confirmed ASD diagnosis, IQ over 70, English-language comprehension, a carer who can complete questionnaires in English and no current participation in a private social communication intervention. Children will be randomised to receive an intervention with a therapist and Kaspar, or with the therapist only. They will receive two familiarisation sessions and six treatment sessions for 8 weeks. They will be assessed at baseline, and at 10 and 22 weeks after baseline. The primary outcome of this study is to evaluate whether the predetermined feasibility criteria for a full-scale trial are met. The potential primary outcome measures for a full-scale trial are the Social Communication Questionnaire and the Social Skills Improvement System. We will conduct a preliminary economic analysis. After the study has ended, a sample of 20 participants and their families will be invited to participate in semistructured interviews to explore the feasibility and acceptability of the study's methods and intervention. Parents/carers will provide informed consent, and children will give assent, where appropriate. Care will be taken to avoid pressure or coercion to participate. Aftercare is available from the recruiting NHS Trust, and a phased withdrawal protocol will be followed if children become excessively attached to the robot. 
The results of the study will be disseminated to academic audiences and non-academic stakeholders, for example, families of children with ASD, support groups, clinicians and charities. ISRCTN registry (ISRCTN14156001); Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. In-vehicle group activity modeling and simulation in sensor-based virtual environment

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion - simulating human-like gestures and postures, facial impressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.

  17. Empirical modeling of dynamic behaviors of pneumatic artificial muscle actuators.

    PubMed

    Wickramatunge, Kanchana Crishan; Leephakpreeda, Thananchai

    2013-11-01

    Pneumatic Artificial Muscle (PAM) actuators yield muscle-like mechanical actuation with a high force-to-weight ratio, a soft and flexible structure, and adaptable compliance for rehabilitation and prosthetic appliances for the disabled as well as for humanoid robots or machines. The present study aims to develop empirical models of PAM actuators, that is, a PAM coupled with pneumatic control valves, in order to describe their dynamic behaviors for practical control design and usage. Empirical modeling is an efficient approach to computer-based modeling based on observations of real behaviors. The different dynamic behaviors of each PAM actuator are due not only to the structures of the PAM actuators themselves, but also to variations in their material properties introduced during manufacturing. To overcome these difficulties, the proposed empirical models are experimentally derived from the real physical behaviors of the PAM actuators being implemented. In case studies, the simulated results, which agree well with experimental results, show that the proposed methodology can be applied to describe the dynamic behaviors of real PAM actuators. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
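    A minimal sketch of empirical modeling in this spirit: fit a linear-in-parameters force model to observations by least squares. The "observations" below are synthetic stand-ins for measured PAM data, and the model form F = aP + bPε + c is an assumption for illustration:

```python
import numpy as np

# Synthetic "observations" standing in for measured PAM data (assumption):
# force grows with pressure P and falls with contraction ratio eps.
rng = np.random.default_rng(0)
pressure = rng.uniform(1e5, 5e5, 50)       # Pa
contraction = rng.uniform(0.0, 0.25, 50)   # dimensionless stroke
force = 2e-3 * pressure * (1 - 3.0 * contraction) + rng.normal(0, 5, 50)

# Empirical model F = a*P + b*P*eps + c, fitted by linear least squares.
A = np.column_stack([pressure, pressure * contraction, np.ones_like(pressure)])
coeffs, *_ = np.linalg.lstsq(A, force, rcond=None)
a, b, c = coeffs
```

    The fitted coefficients recover the generating relationship, which is the essential step before using such a model for control design.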

  18. "Artificial humans": Psychology and neuroscience perspectives on embodiment and nonverbal communication.

    PubMed

    Vogeley, Kai; Bente, Gary

    2010-01-01

    "Artificial humans", so-called "Embodied Conversational Agents" and humanoid robots, are assumed to facilitate human-technology interaction referring to the unique human capacities of interpersonal communication and social information processing. While early research and development in artificial intelligence (AI) focused on processing and production of natural language, the "new AI" has also taken into account the emotional and relational aspects of communication with an emphasis both on understanding and production of nonverbal behavior. This shift in attention in computer science and engineering is reflected in recent developments in psychology and social cognitive neuroscience. This article addresses key challenges which emerge from the goal to equip machines with socio-emotional intelligence and to enable them to interpret subtle nonverbal cues and to respond to social affordances with naturally appearing behavior from both perspectives. In particular, we propose that the creation of credible artificial humans not only defines the ultimate test for our understanding of human communication and social cognition but also provides a unique research tool to improve our knowledge about the underlying psychological processes and neural mechanisms. Copyright © 2010. Published by Elsevier Ltd.

  19. Clifford support vector machines for classification, regression, and recurrence.

    PubMed

    Bayro-Corrochano, Eduardo Jose; Arana-Daniel, Nancy

    2010-11-01

    This paper introduces the Clifford support vector machines (CSVM) as a generalization of the real and complex-valued support vector machines using the Clifford geometric algebra. In this framework, we handle the design of kernels involving the Clifford or geometric product. In this approach, one redefines the optimization variables as multivectors. This allows us to have a multivector as output. Therefore, we can represent multiple classes according to the dimension of the geometric algebra in which we work. We show that one can apply CSVM for classification and regression and also to build a recurrent CSVM. The CSVM is an attractive approach for the multiple input multiple output processing of high-dimensional geometric entities. We carried out comparisons between CSVM and the current approaches to solve multiclass classification and regression. We also study the performance of the recurrent CSVM with experiments involving time series. The authors believe that this paper can be of great use for researchers and practitioners interested in multiclass hypercomplex computing, particularly for applications in complex and quaternion signal and image processing, satellite control, neurocomputation, pattern recognition, computer vision, augmented virtual reality, robotics, and humanoids.

  20. Basic emotions and adaptation. A computational and evolutionary model.

    PubMed

    Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio

    2017-01-01

    The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on studying the emotion of fear; therefore, the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning, and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives and to select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations in each individual's life cycle. 
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that the presence of an internal time-perception unit is essential for the robots to achieve the highest performance and survivability across all conditions.
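    The evolutionary training loop can be sketched as a simple genetic algorithm. The three-element genome and quadratic fitness below are stand-ins for the neural-network weights and behavioural fitness used in the paper:

```python
import random

random.seed(1)

TARGET = [0.5, -0.2, 0.8]  # illustrative "optimal" weight vector (assumed)

def fitness(genome):
    # Higher is better: negative squared distance to the target behaviour.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=60, mut_sigma=0.1):
    # Random initial population of weight vectors.
    pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]            # truncation selection
        # Refill the population with Gaussian mutations of the elites.
        pop = elite + [
            [g + random.gauss(0, mut_sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
```

    Selection plus mutation across generations drives the population toward high-fitness behaviour, the same mechanism by which the fear-related strategies emerge in the simulated robots.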

  1. Special Purpose Crew Restraints for Teleoperation

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina; Norris, Lena

    2004-01-01

    With permanent human presence onboard the International Space Station (ISS), and long duration space missions being planned for the moon and Mars, humans will be living and working in microgravity over increasingly long periods of time. In addition to weightlessness, the confined nature of a spacecraft environment results in ergonomic challenges such as limited visibility and access to the activity area. These challenges can result in prolonged periods of unnatural postures for the crew, ultimately causing pain, injury, and loss of productivity. Determining the right set of human factors requirements and providing an ergonomically designed environment is crucial to mission success. While a number of general purpose restraints have been used on ISS (handrails, foot loops), experience has shown that these general purpose restraints may not be optimal, or even acceptable, for some tasks that have unique requirements. For example, some onboard activities require extreme stability (e.g., glovebox microsurgery), and others involve the use of arm, torso and foot movements in order to perform the task (e.g., robotic teleoperation); standard restraint systems will not work in these situations. The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center began evaluations of crew restraints for these special situations by looking at NASA's Robonaut. Developed by the Robot Systems Technology Branch, Robonaut is a humanoid robot that can be remotely operated by an operator through a telepresence control system. It was designed to perform work in hazardous environments (e.g., Extra Vehicular Activities). A Robonaut restraint was designed, modeled for the population, and ultimately tested onboard the KC-135 microgravity aircraft. While in microgravity, participants were asked to get in and out of the restraint from different locations, perform maximum reach exercises, and finally to teleoperate Robonaut while in the restraint. 
The sessions were videotaped, and participants completed a questionnaire at the end of each flight day. Results from this evaluation are being used to develop the human factors design requirements for teleoperation tasks in microgravity.

  2. Topology search of 3-DOF translational parallel manipulators with three identical limbs for leg mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, Mingfeng; Ceccarelli, Marco

    2015-07-01

    Three-degree-of-freedom (3-DOF) translational parallel manipulators (TPMs) have been widely studied both in industry and academia in the past decades. However, most architectures of 3-DOF TPMs are created mainly from designers' intuition, empirical knowledge, or associative reasoning, and topology synthesis research on 3-DOF TPMs is still limited. In order to map out an atlas of designs for 3-DOF TPMs, a topology search is presented for the enumeration of 3-DOF TPMs whose limbs can be modeled as 5-DOF serial chains. The proposed topology search of 3-DOF TPMs aims to overcome the sensitivities of the design solution of a 3-DOF TPM for a LARM leg mechanism in a biped robot. The topology search, which is based on the concept of generation and specialization in graph theory, is reported as a step-by-step procedure with desired specifications, principles and rules of generalization, design requirements and constraints, and an algorithm of number synthesis. In order to obtain new feasible designs for a chosen example and to limit the search domain under general considerations, one topological generalized kinematic chain is chosen to be specialized. An atlas of new feasible designs is obtained and analyzed for a specific solution as leg mechanisms. The proposed methodology provides a topology search for 3-DOF TPMs for leg mechanisms, but it can also be extended to other applications and tasks.

  3. Defining the Scope of Systems of Care: An Ecological Perspective

    ERIC Educational Resources Information Center

    Cook, James R.; Kilmer, Ryan P.

    2010-01-01

    The definition of a system of care (SOC) can guide those intending to develop and sustain SOCs. Hodges, Ferreira, Israel, and Mazza [Hodges, S., Ferreira, K., Israel, N., & Mazza, J. (in press). "Systems of care, featherless bipeds, and the measure of all things." "Evaluation and Program Planning"] have emphasized contexts in which services are…

  4. Thermo-electrochemical evaluation of lithium-ion batteries for space applications

    NASA Astrophysics Data System (ADS)

    Walker, W.; Yayathi, S.; Shaw, J.; Ardebili, H.

    2015-12-01

    Advanced energy storage and power management systems designed through rigorous materials selection, testing and analysis processes are essential to ensuring mission longevity and success for space exploration applications. Comprehensive testing of Boston Power Swing 5300 lithium-ion (Li-ion) cells utilized by the National Aeronautics and Space Administration (NASA) to power humanoid robot Robonaut 2 (R2) is conducted to support the development of a test-correlated Thermal Desktop (TD) Systems Improved Numerical Differencing Analyzer (SINDA) (TD-S) model for evaluation of power system thermal performance. Temperature, current, working voltage and open circuit voltage measurements are taken during nominal charge-discharge operations to provide necessary characterization of the Swing 5300 cells for TD-S model correlation. Building from test data, embedded FORTRAN statements directly simulate Ohmic heat generation of the cells during charge-discharge as a function of surrounding temperature, local cell temperature and state of charge. The unique capability gained by using TD-S is demonstrated by simulating R2 battery thermal performance in example orbital environments for hypothetical extra-vehicular activities (EVA) exterior to a small satellite. Results provide necessary demonstration of this TD-S technique for thermo-electrochemical analysis of Li-ion cells operating in space environments.
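    The embedded heat-generation statements can be sketched as an I²R (Ohmic) model whose resistance depends on state of charge and temperature. The resistance model below is an illustrative assumption, not Boston Power cell data or the paper's FORTRAN:

```python
def ohmic_heat_w(current_a, soc, cell_temp_c):
    """Ohmic (I^2 * R) heat generation for one cell, in watts.
    Assumed behaviour: internal resistance rises at low state of
    charge and at low temperature (illustrative coefficients)."""
    r_base = 0.040  # ohms at full charge and 25 C (assumed)
    r = r_base * (1.0 + 0.5 * (1.0 - soc)) * (1.0 + 0.01 * (25.0 - cell_temp_c))
    return current_a ** 2 * r
```

    A thermal solver would evaluate such a function at each time step, feeding the heat load back into the temperature field, which in turn changes the resistance on the next step.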

  5. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    NASA Astrophysics Data System (ADS)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we do not yet understand the behavioural and mechanistic characteristics of natural language, or how mechanisms in the brain allow language to be acquired and processed. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
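    The multiple-timescale idea can be illustrated with a single leaky-integrator (continuous-time recurrent neural network) unit integrated with Euler steps; the time constants and self-weight below are arbitrary choices, not the paper's parameters:

```python
import math

def ctrnn_step(state, inp, tau, dt=0.01, w_self=1.0):
    """One Euler step of a leaky-integrator (CTRNN) unit:
    tau * dy/dt = -y + w_self * tanh(y) + inp.
    A large tau gives a slow unit, a small tau a fast one."""
    dydt = (-state + w_self * math.tanh(state) + inp) / tau
    return state + dt * dydt

# Fast and slow units driven by the same input: the fast unit moves further
# in one step, illustrating the multiple-timescale idea in the architecture.
fast = ctrnn_step(0.0, 1.0, tau=0.05)
slow = ctrnn_step(0.0, 1.0, tau=2.0)
```

    Assigning small time constants to sensory-level units and large ones to higher-level units is what lets such a network represent both fast dynamics and slowly varying context.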

  6. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm.

    PubMed

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case with RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
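    A minimal sketch of memory-guided sampling in this RANSAC-plus-harmony-search spirit, shown here for robust line fitting rather than homography estimation; the memory size, improvisation rate, and inlier tolerance are assumed values, not the authors':

```python
import random

random.seed(0)

# Synthetic points: 40 near the line y = 2x + 1, plus 10 gross outliers.
points = [(x, 2 * x + 1 + random.gauss(0, 0.05)) for x in range(40)]
points += [(random.uniform(0, 40), random.uniform(-50, 50)) for _ in range(10)]

def line_from(p, q):
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return None
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def inliers(model, tol=0.2):
    m, b = model
    return [p for p in points if abs(p[1] - (m * p[0] + b)) < tol]

def guided_ransac(iters=100, memory=10):
    harmony = []          # memory of best (score, inlier set) "harmonies"
    best_model, best_in = None, []
    for _ in range(iters):
        if harmony and random.random() < 0.7:
            # Improvise near a remembered good sample instead of pure random.
            sample = random.choice(random.choice(harmony)[1])
            pair = (sample, random.choice(points))
        else:
            pair = tuple(random.sample(points, 2))
        model = line_from(*pair)
        if model is None:
            continue
        ins = inliers(model)
        if ins:
            harmony = sorted(harmony + [(len(ins), ins)], reverse=True)[:memory]
        if len(ins) > len(best_in):
            best_model, best_in = model, ins
    return best_model, best_in

best_line, consensus = guided_ransac()
```

    Drawing new samples from remembered high-consensus sets is the HS-flavoured twist: later iterations concentrate on promising regions of the sample space rather than sampling uniformly.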

  7. View Estimation Based on Value System

    NASA Astrophysics Data System (ADS)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating the behavior observed from the caregiver, the child updates a model of his/her own estimated view so as to minimize the estimation error of the reward during the behavior. From this view, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver is discussed.
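    The TD-error signal at the heart of the method can be sketched in tabular form; the states and learning parameters below are illustrative, and the paper reuses this same error to update the view-estimation parameters rather than a value table:

```python
# Tabular TD(0): the temporal-difference error measures how far the current
# value estimate is from the reward-plus-discounted-next-value target.
def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error     # move the estimate toward the target
    return td_error

values = {"far": 0.0, "near": 0.0, "goal": 1.0}
err = td_update(values, "near", "goal", reward=0.0)
```

    Any differentiable model, such as the view-estimation parameters here, can be nudged by the same error signal in place of the table entry.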

  8. Embodied artificial agents for understanding human social cognition.

    PubMed

    Wykowska, Agnieszka; Chaminade, Thierry; Cheng, Gordon

    2016-05-05

    In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows fundamental human mechanisms of social cognition to be tested, both at the behavioural and at the neural level, in a controlled but ecologically valid manner. This paper will review existing literature that reports studies in which artificial embodied agents have been used to study social cognition and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum 'What is a social agent?' © 2016 The Authors.

  9. Elastic MCF Rubber with Photovoltaics and Sensing on Hybrid Skin (H-Skin) for Artificial Skin by Utilizing Natural Rubber: 2nd Report on the Effect of Tension and Compression on the Hybrid Photo- and Piezo-Electricity Properties in Wet-Type Solar Cell Rubber.

    PubMed

    Shimada, Kunio

    2018-06-06

    In contrast to ordinary solid-state solar cells, a flexible, elastic, extensible and lightweight solar cell has the potential to be extremely useful in many new engineering applications, such as in the field of robotics. We therefore propose a new type of artificial skin for humanoid robots with hybrid functions, which we have termed hybrid skin (H-Skin). To fabricate such a solar cell, we have continued to utilize the principles of ordinary wet-type, or dye-sensitized, solar cells in this follow-up to our first report. In the first report, we dealt with both the photovoltaic and piezoelectric effects in dry-type magnetic compound fluid (MCF) rubber solar cells, which arise because the polyisoprene, the oleic acid of the magnetic fluid (MF), and water serve as p- and n-type semiconductors. In the present report, we deal with wet-type MCF rubber solar cells using sensitized dyes and electrolytes. The photoreactions generated through the synthesis of these components were investigated in an experiment using irradiation with visible and ultraviolet light. In addition, magnetic clusters were formed by the aggregation of Fe₃O₄ in the MF, and the metal particles created the hetero-junction structure of the semiconductors. The generation of both photo- and piezo-electricity in the MCF rubber solar cell is described with a physical model, and the effects of tension and compression on its electrical properties were evaluated. Finally, we experimentally demonstrated the effect of the distance between the electrodes of the solar cell on the photoelectricity and built-in electricity.

  10. Imagery May Arise from Associations Formed through Sensory Experience: A Network of Spiking Neurons Controlling a Robot Learns Visual Sequences in Order to Perform a Mental Rotation Task

    PubMed Central

    McKinstry, Jeffrey L.; Fleischer, Jason G.; Chen, Yanqing; Gall, W. Einar; Edelman, Gerald M.

    2016-01-01

    Mental imagery occurs “when a representation of the type created during the initial phases of perception is present but the stimulus is not actually being perceived.” How does the capability to perform mental imagery arise? Extending the idea that imagery arises from learned associations, we propose that mental rotation, a specific form of imagery, could arise through the mechanism of sequence learning–that is, by learning to regenerate the sequence of mental images perceived while passively observing a rotating object. To demonstrate the feasibility of this proposal, we constructed a simulated nervous system and embedded it within a behaving humanoid robot. By observing a rotating object, the system learns the sequence of neural activity patterns generated by the visual system in response to the object. After learning, it can internally regenerate a similar sequence of neural activations upon briefly viewing the static object. This system learns to perform a mental rotation task in which the subject must determine whether two objects are identical despite differences in orientation. As with human subjects, the time taken to respond is proportional to the angular difference between the two stimuli. Moreover, as reported in humans, the system fills in intermediate angles during the task, and this putative mental rotation activates the same pathways that are activated when the system views physical rotation. This work supports the proposal that mental rotation arises through sequence learning and the idea that mental imagery aids perception through learned associations, and suggests testable predictions for biological experiments. PMID:27653977
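
    The linear relation between response time and angular difference follows naturally from the sequence-learning account: if the rotation is regenerated frame by frame at a fixed angular step, response time is proportional to the angle. A toy sketch, where the step size and per-frame time are hypothetical values rather than the model's parameters:

```python
import math

# If a rotation is replayed as a learned sequence of images at a fixed
# angular step (hypothetical values below), response time grows linearly
# with angular difference, matching the behavioural result in the abstract.

def mental_rotation_time(angle_deg, step_deg=15.0, time_per_step=0.05):
    """Seconds to replay the image sequence spanning angle_deg degrees."""
    steps = math.ceil(angle_deg / step_deg)
    return steps * time_per_step
```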

  11. Narrative Constructions for the Organization of Self Experience: Proof of Concept via Embodied Robotics

    PubMed Central

    Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F.

    2017-01-01

    It has been proposed that starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience—a situation model—and a mechanism that allows information to be extracted from sentences and mapped onto the situation model that has been derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation which allows the system to learn about the intentionality of mental states, and argument substitution which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world-interaction required for representing meaning in a situation model. In this context proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage based effects: that is, that the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. Both of these are validated in the experimental literature. PMID:28861011

  12. Narrative Constructions for the Organization of Self Experience: Proof of Concept via Embodied Robotics.

    PubMed

    Mealier, Anne-Laure; Pointeau, Gregoire; Mirliaz, Solène; Ogawa, Kenji; Finlayson, Mark; Dominey, Peter F

    2017-01-01

    It has been proposed that starting from meaning that the child derives directly from shared experience with others, adult narrative enriches this meaning and its structure, providing causal links between unseen intentional states and actions. This would require a means for representing meaning from experience-a situation model-and a mechanism that allows information to be extracted from sentences and mapped onto the situation model that has been derived from experience, thus enriching that representation. We present a hypothesis and theory concerning how the language processing infrastructure for grammatical constructions can naturally be extended to narrative constructions to provide a mechanism for using language to enrich meaning derived from physical experience. Toward this aim, the grammatical construction models are augmented with additional structures for representing relations between events across sentences. Simulation results demonstrate proof of concept for how the narrative construction model supports multiple successive levels of meaning creation which allows the system to learn about the intentionality of mental states, and argument substitution which allows extensions to metaphorical language and analogical problem solving. Cross-linguistic validity of the system is demonstrated in Japanese. The narrative construction model is then integrated into the cognitive system of a humanoid robot that provides the memory systems and world-interaction required for representing meaning in a situation model. In this context proof of concept is demonstrated for how the system enriches meaning in the situation model that has been directly derived from experience. In terms of links to empirical data, the model predicts strong usage based effects: that is, that the narrative constructions used by children will be highly correlated with those that they experience. It also relies on the notion of narrative or discourse function words. Both of these are validated in the experimental literature.

  13. Passive motion paradigm: an alternative to optimal control.

    PubMed

    Mohan, Vishwanathan; Morasso, Pietro

    2011-01-01

    In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition along two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering techniques for the design of control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution, but also provides the self with information on the feasibility, consequences, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at the same time a review of the PMP rationale as a computational theory and a perspective on how to develop it for designing better cognitive architectures.
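
    As a concrete illustration of the attractor dynamics described above, here is a minimal PMP-style relaxation for a planar 2-link arm: the goal induces a virtual spring force at the end-effector, and the Jacobian transpose maps it to joint space, so no kinematic inversion or cost function is needed. All numerical values (link lengths, gain, step size) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

L1, L2 = 0.3, 0.25  # illustrative link lengths (m)

def fk(q):
    """Forward kinematics: end-effector position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def pmp_step(q, goal, K=8.0, dt=0.01):
    """One relaxation step: goal force field mapped via Jacobian transpose."""
    force = K * (goal - fk(q))       # virtual spring toward the goal
    q_dot = jacobian(q).T @ force    # no kinematic inversion required
    return q + dt * q_dot

q = np.array([0.3, 0.5])
goal = np.array([0.35, 0.25])        # reachable target (|goal| < L1 + L2)
for _ in range(3000):
    q = pmp_step(q, goal)
```

    After relaxation the end-effector has been drawn into the goal's attractor; with a redundant arm the same update would settle on one of the many joint configurations reaching the goal, which is how PMP dissolves the DoFs problem at runtime.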

  14. Maneuvers during legged locomotion

    NASA Astrophysics Data System (ADS)

    Jindrich, Devin L.; Qiao, Mu

    2009-06-01

    Maneuverability is essential for locomotion. For animals in the environment, maneuverability is directly related to survival. For humans, maneuvers such as turning are associated with increased risk for injury, either directly through tissue loading or indirectly through destabilization. Consequently, understanding the mechanics and motor control of maneuverability is a critical part of locomotion research. We briefly review the literature on maneuvering during locomotion with a focus on turning in bipeds. Walking turns can use one of several different strategies. Anticipation can be important to adjust kinematics and dynamics for smooth and stable maneuvers. During running, turns may be substantially constrained by the requirement for body orientation to match movement direction at the end of a turn. A simple mathematical model based on the requirement for rotation to match direction can describe leg forces used by bipeds (humans and ostriches). During running turns, both humans and ostriches control body rotation by generating fore-aft forces. However, whereas humans must generate large braking forces to prevent body over-rotation, ostriches do not. For ostriches, generating the lateral forces necessary to change movement direction results in appropriate body rotation. Although ostriches required smaller braking forces due in part to increased rotational inertia relative to body mass, other movement parameters also played a role. Turning performance resulted from the coordinated behavior of an integrated biomechanical system. Results from preliminary experiments on horizontal-plane stabilization support the hypothesis that controlling body rotation is an important aspect of stable maneuvers. In humans, body orientation relative to movement direction is rapidly stabilized during running turns within the minimum of two steps theoretically required to complete analogous maneuvers. During straight running and cutting turns, humans exhibit spring-mass behavior in the horizontal plane. Changes in the horizontal projection of leg length were linearly related to changes in horizontal-plane leg forces. Consequently, the passive dynamic stabilization associated with spring-mass behavior may contribute to stability during maneuvers in bipeds. Understanding the mechanics of maneuverability will be important for understanding the motor control of maneuvers and may also prove useful for understanding stability.
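
    The reported linear relation between horizontal-plane leg force and the change in the horizontal projection of leg length is precisely spring-like behavior, F = k * delta_l. A trivial sketch, with a hypothetical stiffness value rather than measured data:

```python
# Spring-like horizontal-plane behavior: force proportional to the change
# in the horizontal projection of leg length. Stiffness k is hypothetical.

def horizontal_leg_force(delta_l, k=8000.0):
    """Horizontal leg force (N) for a leg-length change delta_l (m)."""
    return k * delta_l
```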

  15. Basic emotions and adaptation. A computational and evolutionary model

    PubMed Central

    2017-01-01

    The core principles of evolutionary theories of emotion hold that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on the emotion of fear, so the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning: the agents must learn a strategy to recognize whether the environment is safe or represents a threat to their lives, and select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual “sensations” based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations over each individual’s life cycle. We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions. PMID:29107988
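
    The training scheme described above, evolving network parameters with a genetic algorithm rather than gradient descent, can be sketched with a generic truncation-selection GA. The fitness function, population size and mutation scale below are illustrative stand-ins, not the study's settings.

```python
import random

def evolve(fitness, n_weights=4, pop_size=20, generations=50, sigma=0.1):
    """Truncation-selection GA: keep the best half, mutate parents to refill."""
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by fitness
        parents = pop[: pop_size // 2]               # survivors
        children = [[w + random.gauss(0, sigma)      # Gaussian mutation
                     for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: closeness of the genome to a hypothetical target behaviour.
random.seed(0)
target = [0.5, -0.2, 0.8, 0.0]
best = evolve(lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)))
```

    In the study the genome would encode the weights of the agent's neural network and fitness would be measured by the agent's survival and reward in the simulated environment; the selection-and-mutation loop is the same.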

  16. Using step width to compare locomotor biomechanics between extinct, non-avian theropod dinosaurs and modern obligate bipeds.

    PubMed

    Bishop, P J; Clemente, C J; Weems, R E; Graham, D F; Lamas, L P; Hutchinson, J R; Rubenson, J; Wilson, R S; Hocknull, S A; Barrett, R S; Lloyd, D G

    2017-07-01

    How extinct, non-avian theropod dinosaurs locomoted is a subject of considerable interest, as is the manner in which it evolved on the line leading to birds. Fossil footprints provide the most direct evidence for answering these questions. In this study, step width, the mediolateral (transverse) distance between successive footfalls, was investigated with respect to speed (stride length) in non-avian theropod trackways of Late Triassic age. Comparable kinematic data were also collected for humans and 11 species of ground-dwelling birds. Permutation tests of the slope on a plot of step width against stride length showed that step width decreased continuously with increasing speed in the extinct theropods (p < 0.001), as well as the five tallest bird species studied (p < 0.01). Humans, by contrast, showed an abrupt decrease in step width at the walk-run transition. In the modern bipeds, these patterns reflect the use of either a discontinuous locomotor repertoire, characterized by distinct gaits (humans), or a continuous locomotor repertoire, where walking smoothly transitions into running (birds). The non-avian theropods are consequently inferred to have had a continuous locomotor repertoire, possibly including grounded running. Thus, features that characterize avian terrestrial locomotion had begun to evolve early in theropod history. © 2017 The Author(s).
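
    A permutation test of a regression slope, the statistical tool used above, can be sketched as follows: shuffle the response values to break any real association and count how often the shuffled slope is at least as extreme as the observed one. The data below are synthetic, for illustration only.

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def permutation_p_value(xs, ys, n_perm=2000):
    """Two-sided p-value for the null hypothesis of zero slope."""
    observed = abs(slope(xs, ys))
    ys = list(ys)            # work on a copy; shuffling breaks the pairing
    hits = 0
    for _ in range(n_perm):
        random.shuffle(ys)
        if abs(slope(xs, ys)) >= observed:
            hits += 1
    return hits / n_perm

random.seed(1)
stride = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]          # synthetic
width = [0.30, 0.26, 0.24, 0.20, 0.17, 0.15, 0.12, 0.10]   # decreasing
p = permutation_p_value(stride, width)
```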

  17. The functional origin of dinosaur bipedalism: Cumulative evidence from bipedally inclined reptiles and disinclined mammals.

    PubMed

    Persons, W Scott; Currie, Philip J

    2017-05-07

    Bipedalism is a trait basal to, and widespread among, dinosaurs. It has been previously argued that bipedalism arose in the ancestors of dinosaurs for the function of freeing the forelimbs to serve as predatory weapons. However, this argument does not explain why bipedalism was retained among numerous herbivorous groups of dinosaurs. We argue that bipedalism arose in the dinosaur line for the purpose of enhanced cursoriality. Modern facultatively bipedal lizards offer an analog for the first stages in the evolution of dinosaurian bipedalism. Many extant lizards assume a bipedal stance while attempting to flee predators at maximum speed. Bipedalism, when combined with a caudofemoralis musculature, has cursorial advantages because the caudofemoralis provides a greater source of propulsion to the hindlimbs than is generally available to the forelimbs. That cursorial advantage explains the relative abundance of cursorial facultative bipeds and obligate bipeds among fossil diapsids and the relative scarcity of either among mammals. Having lost their caudofemoralis in the Permian, perhaps in the context of adapting to a fossorial lifestyle, the mammalian line has been disinclined towards bipedalism, but, having never lost the caudofemoralis of their ancestors, cursorial avemetatarsalians (bird-line archosaurs) were naturally inclined towards bipedalism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. KSC-2012-4344

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – During a free-flight test of the Project Morpheus vehicle at the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the vehicle lifted off the ground and then experienced a hardware component failure, which prevented it from maintaining stable flight. Engineers are looking into the test data and the agency will release information as it becomes available. Failures such as these were anticipated prior to the test, and are part of the development process for any complex spaceflight hardware. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

  19. Tactile Perception of Roughness and Hardness to Discriminate Materials by Friction-Induced Vibration

    PubMed Central

    Zhao, Xuezeng

    2017-01-01

    The human fingertip is an exquisitely sensitive bio-tactile sensor, perceiving different materials through the many highly sensitive mechanoreceptors distributed all over the skin. The tactile perception of surface roughness and material hardness can be estimated from the skin vibrations generated as a fingertip strokes a surface, rather than being held in a static position. Moreover, reciprocating sliding with increasing velocities and pressures are two behaviors humans commonly use to discriminate materials, but the question remains how sliding velocity and normal load correlate with the tactile perception of surface roughness and hardness during material discrimination. To investigate this correlation, a finger-inspired crossed-I-beam tactile tester was designed to mimic these human tactile discrimination behaviors. A novel discrimination method is proposed in which the slopes of the fast Fourier transform (FFT) integral of the vibration acceleration signal, generated by the fingertip rubbing on surfaces at increasing sliding velocity and increasing normal load, are defined as kv and kw, respectively, and used to discriminate the surface roughness and hardness of different materials. More than eight types of material were tested, demonstrating the capability and advantages of this tactile discrimination method. Our study may find applications in investigating the perceptual abilities of humanoid robots. PMID:29182538
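
    One plausible reading of the kv construction (our sketch, not the authors' code): for each sliding velocity, integrate the FFT magnitude of the vibration signal, then take the fitted slope of that integral against velocity. The signals below are synthetic stand-ins whose amplitude grows with velocity.

```python
import numpy as np

def fft_integral(signal):
    """Integral (sum) of the single-sided FFT magnitude spectrum."""
    return float(np.sum(np.abs(np.fft.rfft(signal))))

def kv_slope(velocities, signals):
    """Least-squares slope of the FFT integral against sliding velocity."""
    integrals = [fft_integral(s) for s in signals]
    k, _intercept = np.polyfit(velocities, integrals, 1)
    return k

rng = np.random.default_rng(0)
velocities = np.array([10.0, 20.0, 30.0, 40.0])   # mm/s, illustrative
signals = [v * 0.01 * rng.standard_normal(256) for v in velocities]
k = kv_slope(velocities, signals)                 # vibration grows with speed
```

    A kw slope would be computed the same way, with normal load in place of velocity; materials are then discriminated by comparing their (kv, kw) pairs.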

  20. Revisiting the body-schema concept in the context of whole-body postural-focal dynamics.

    PubMed

    Morasso, Pietro; Casadio, Maura; Mohan, Vishwanathan; Rea, Francesco; Zenzeri, Jacopo

    2015-01-01

    The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical-system approach, linked to an extension of the equilibrium-point hypothesis called the Passive Motion Paradigm: this dynamical system generates goal-oriented, spatio-temporal sensorimotor patterns, integrating direct and inverse internal models in a multi-referential framework. Such a computational model is intended to operate both as a general synergy-formation machinery for planning whole-body actions in humanoid robots and as a predictor of coordinated sensory-motor patterns in human movements. To illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks is analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole body, and a postural task, namely maintaining overall stability.
