Sample records for humanoid robot platform

  1. Multi-layer robot skin with embedded sensors and muscles

    NASA Astrophysics Data System (ADS)

    Tomar, Ankit; Tadesse, Yonas

    2016-04-01

    Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facially expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interaction. A novel composite artificial skin consisting of sensors and twisted polymer actuators is proposed. The artificial skin conforms to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin so that fluids can be injected to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. Such humanoids can be used by computer scientists and behavioral scientists to test various algorithms and to develop more capable humanoid robots with facial expression capability. The small-scale humanoid robots can also assist ongoing therapeutic treatment research with autistic children. The multilayer skin can be used in many soft robots, enabling them to detect both temperature and pressure while actuating the entire structure.

  2. Balancing Theory and Practical Work in a Humanoid Robotics Course

    ERIC Educational Resources Information Center

    Wolff, Krister; Wahde, Mattias

    2010-01-01

    In this paper, we summarize our experiences from teaching a course in humanoid robotics at Chalmers University of Technology in Goteborg, Sweden. We describe the robotic platform used in the course and we propose the use of a custom-built robot consisting of standard electronic and mechanical components. In our experience, by using standard…

  3. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy

    NASA Astrophysics Data System (ADS)

    Burns, Alec; Tadesse, Yonas

    2014-03-01

    In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet the humanoid development needs a seamless integration of artificial muscles, sensors, controllers, and structures. We have designed a human-like robot with 15 DOFs, a height of 580 mm, and an arm span of 925 mm, fabricated using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arms and hands for safe human-robot interaction, and a two-wheel mobile platform for maneuverability, are incorporated into the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented. A comprehensive study of the upper-body design, mobile base, actuator selection, electronics, and performance evaluation is included in this paper.

  4. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work focuses on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models are discussed, in which the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of using a humanoid robotic platform, and specifically the open-source iCub platform, for the study of embodied cognition.

  5. Natural Tasking of Robots Based on Human Interaction Cues

    DTIC Science & Technology

    2005-06-01

    [Indexed excerpt only; no abstract available. The fragments list project personnel (Matthew Marjanovic, researcher, ITA Software; Brian Scassellati, Assistant Professor of Computer Science, Yale; Matthew Williamson) and cited references, including Charlie C. Kemp, "Shoes as a platform for vision," 7th IEEE International Symposium on Wearable Computers, 2004, and Matthew J. Marjanovic's MIT AI Lab work on "meso," simulated muscles for a humanoid robot (2001).]

  6. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work focuses on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models are discussed, in which the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of using a humanoid robotic platform, and specifically the open-source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  7. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
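    The abstract describes smoothing the back-mounted accelerometer signal with a Kalman filter before it feeds the fuzzy balance controller, but gives no filter parameters. The sketch below is therefore only a minimal one-dimensional Kalman filter in Python with assumed process/measurement noise variances (q, r) and a made-up tilt signal; it illustrates the smoothing step, not the authors' actual controller.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter (random-walk state model) for smoothing
    a noisy tilt estimate from an accelerometer. q and r are assumed
    process and measurement noise variances, not values from the paper."""
    x, p = x0, p0
    filtered = []
    for z in measurements:
        # Predict: state assumed constant, uncertainty grows by q.
        p = p + q
        # Update with the new accelerometer reading z.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

# Hypothetical noisy tilt signal (degrees) around a slow lean.
t = np.linspace(0, 5, 500)
tilt = 5.0 * np.sin(0.5 * t) + np.random.normal(0, 1.0, t.size)
smooth_tilt = kalman_1d(tilt)
```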

  8. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  9. Electroactive polymer and shape memory alloy actuators in biomimetics and humanoids

    NASA Astrophysics Data System (ADS)

    Tadesse, Yonas

    2013-04-01

    There is a strong need to replicate natural muscles with artificial materials, as the structure and function of natural muscle are optimal for articulation. In particular, the cylindrical shape of natural muscle fiber and its interconnected structure motivate critical investigation of artificial-muscle geometry and its implementation in the design phase of certain platforms. Biomimetic robots and Humanoid Robot heads with Facial Expressions (HRwFE) are typical platforms that can be used to study the geometrical effects of artificial muscles. It has been shown that electroactive polymer and shape memory alloy artificial muscles and their composites are among the candidate materials that may replicate natural muscles, and they have shown great promise for biomimetics and humanoid robots. The application of these materials to these systems reveals the challenges and associated technologies that need to be developed in parallel. This paper focuses on computer-aided design (CAD) models of conductive polymer and shape memory alloy actuators in various biomimetic systems and a Humanoid Robot with Facial Expressions (HRwFE). The design of these systems is presented in a comparative manner, focusing primarily on three critical parameters: the stress, the strain, and the geometry of the artificial muscle.
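    The review compares actuator candidates primarily through stress, strain, and geometry. As a toy illustration of that comparison (not taken from the paper), the sketch below computes engineering stress and strain for a cylindrical artificial-muscle fiber from assumed force, diameter, and contraction values.

```python
import math

def muscle_stress_strain(force_n, diameter_m, rest_length_m, contraction_m):
    """Engineering stress (Pa) and strain (dimensionless) for a
    cylindrical artificial-muscle fiber. All inputs are illustrative."""
    area = math.pi * (diameter_m / 2.0) ** 2
    stress = force_n / area
    strain = contraction_m / rest_length_m
    return stress, strain

# Hypothetical numbers for a single SMA-like fiber.
stress, strain = muscle_stress_strain(force_n=2.0, diameter_m=150e-6,
                                      rest_length_m=0.10, contraction_m=0.004)
print(f"stress = {stress / 1e6:.1f} MPa, strain = {strain * 100:.1f} %")
```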

  10. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
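    The record's main contribution is a particle-selection strategy that reduces CPU time per iteration; that strategy is not described here in enough detail to reproduce. The sketch below shows only the generic particle-filter loop (predict from odometry, weight by a landmark distance measurement, resample) that such a strategy would optimize, with made-up noise parameters and a hypothetical field-marker position.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, odom, z_dist, landmark,
            motion_noise=0.05, meas_noise=0.1):
    """One predict/update/resample cycle of a planar particle filter.
    particles: (N, 2) x-y hypotheses; odom: commanded displacement;
    z_dist: measured distance to a known field marker (landmark)."""
    # Predict: apply odometry with additive Gaussian noise.
    particles = particles + odom + rng.normal(0, motion_noise, particles.shape)
    # Update: weight by likelihood of the observed landmark distance.
    expected = np.linalg.norm(particles - landmark, axis=1)
    weights = weights * np.exp(-0.5 * ((z_dist - expected) / meas_noise) ** 2)
    weights = weights + 1e-300            # avoid an all-zero weight vector
    weights /= weights.sum()
    # Resample (the simplest multinomial form; systematic resampling is cheaper).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Hypothetical setup: 500 particles, one goal-post landmark at (4.5, 0) metres.
particles = rng.uniform(-3, 3, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = pf_step(particles, weights,
                             odom=np.array([0.1, 0.0]),
                             z_dist=4.2, landmark=np.array([4.5, 0.0]))
```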

  11. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed Central

    Zheng, Zhi; Swanson, Amy R.; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A.; Weitlauf, Amy F.; Sarkar, Nilanjan

    2013-01-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms. PMID:24014194

  12. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed

    Warren, Zachary E; Zheng, Zhi; Swanson, Amy R; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A; Weitlauf, Amy F; Sarkar, Nilanjan

    2015-11-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms.

  13. Pilot clinical application of an adaptive robotic system for young children with autism

    PubMed Central

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2013-01-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. PMID:24104517

  14. Robonaut 2 and You: Specifying and Executing Complex Operations

    NASA Technical Reports Server (NTRS)

    Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia

    2017-01-01

    Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs, to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.

  15. An Integrated Framework for Human-Robot Collaborative Manipulation.

    PubMed

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I, learning to grasp the table, and 2) phase II, learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: reactive and proactive. The reactive controller lets the robot take a reactive control action to keep the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence in the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
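    The framework switches between proactive (leader) and reactive (follower) behavior using the confidence measure from the human-motion predictor. The authors' predictor and controllers are not specified here, so the snippet below is only a schematic of that switching logic, with placeholder controller functions and an assumed confidence threshold.

```python
def reactive_action(table_tilt_deg, gain=0.01):
    """Follower mode: a hypothetical proportional correction that raises or
    lowers the robot's grip to keep the table horizontal (m/s)."""
    return -gain * table_tilt_deg

def proactive_action(predicted_human_hand_z, robot_hand_z, gain=0.5):
    """Leader mode: move toward where the human hand is predicted to go."""
    return gain * (predicted_human_hand_z - robot_hand_z)

def select_action(confidence, table_tilt_deg, predicted_z, robot_z,
                  threshold=0.7):
    """Switch between proactive and reactive control using the predictor's
    confidence. The 0.7 threshold is an assumption, not the paper's value."""
    if confidence >= threshold:
        return proactive_action(predicted_z, robot_z)
    return reactive_action(table_tilt_deg)

# Example: low confidence -> robot follows and simply levels the table.
cmd = select_action(confidence=0.4, table_tilt_deg=3.0,
                    predicted_z=1.1, robot_z=1.0)
```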

  16. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

    A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  17. Pilot clinical application of an adaptive robotic system for young children with autism.

    PubMed

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2014-07-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. © The Author(s) 2013.

  18. Social humanoid robot SARA: development of the wrist mechanism

    NASA Astrophysics Data System (ADS)

    Penčić, M.; Rackov, M.; Čavić, M.; Kiss, I.; Cioată, V. G.

    2018-01-01

    This paper presents the development of a wrist mechanism for humanoid robots. The research was conducted within a project developing the social humanoid robot Sara - a mobile anthropomorphic platform for researching the social behaviour of robots. There are two basic ways to realize a humanoid wrist. The first is based on biologically inspired structures that have variable stiffness; the second is based on low-backlash mechanisms that have high stiffness. Our solution is a low-backlash differential mechanism that requires small actuators. Based on the kinematic-dynamic requirements, a dynamic model of the robot wrist is formed. A dynamic simulation for several hand positions was performed and the driving torques of the wrist mechanism were determined. The realized wrist has 2 DOFs and enables movement in flexion/extension over 115°, ulnar/radial deviation of ±45°, and combinations of these two movements. It consists of a differential mechanism with three spur bevel gears, two of which are identical driving gears, while the last is the driven gear to which the robot hand is attached. Power and motion transmission from the actuators to the input links of the differential mechanism is realized with two identical gear mechanisms placed in parallel. The wrist mechanism has high carrying capacity and reliability, high efficiency, a compact design, and low backlash that provides high positioning accuracy and repeatability of movement, which is essential for motion control.
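    In a bevel-gear differential wrist of the kind described, the two identical driving gears move one driven gear carrying the hand, so one wrist axis follows the common mode of the two drive angles and the other follows their difference. The sketch below is a minimal kinematic mapping under that assumption, with a unit gear ratio and the abstract's joint ranges used only as illustrative clamps; the actual sign conventions and ratio depend on the real gear arrangement.

```python
def differential_wrist(theta1_deg, theta2_deg, ratio=1.0):
    """Map the two drive-gear angles of a bevel-gear differential to wrist
    flexion/extension and ulnar/radial deviation. The common mode of the
    drives tilts the driven gear; the differential mode spins it. 'ratio'
    is an assumed gear ratio, and which output is which depends on the
    actual mechanism."""
    flexion = ratio * (theta1_deg + theta2_deg) / 2.0
    deviation = ratio * (theta1_deg - theta2_deg) / 2.0
    # Clamp to the ranges quoted in the abstract (115 deg flexion/extension
    # span, +/-45 deg deviation) purely for illustration.
    flexion = max(-57.5, min(57.5, flexion))
    deviation = max(-45.0, min(45.0, deviation))
    return flexion, deviation

print(differential_wrist(30.0, 10.0))   # -> (20.0, 10.0)
```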

  19. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a model classification accuracy above 90.0% as the most important requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster responses to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
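    The quoted information transfer rates can be approximated from the class count N, accuracy P, and response time T using the standard Wolpaw ITR formula, ITR = (60/T)[log2 N + P log2 P + (1-P) log2((1-P)/(N-1))]. The sketch below assumes that definition; the paper may use a slightly different variant, so small discrepancies with the reported 24.7 and 18.8 bits/min are expected.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_time_s):
    """Information transfer rate in bits/min under the Wolpaw definition."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s

# Numbers quoted in the abstract (4-class SSVEP, 6-class P300).
print(round(wolpaw_itr(4, 0.903, 3.65), 1))   # roughly 22.8 bits/min
print(round(wolpaw_itr(6, 0.913, 6.6), 1))    # roughly 17.8 bits/min
```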

  20. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots

    PubMed Central

    Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a model classification accuracy above 90.0% as the most important requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracies over 90.0%. Therefore, the four SSVEP stimuli were used to control four types of robot behavior, while the six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster responses to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper. PMID:26562524

  1. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open-source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled the rapid development of previously expensive and sophisticated systems on a lower budget and with a flatter learning curve for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote-control car. The combination of open-source hardware and software created a flexible and expandable platform for future development, in both software and hardware, in particular in combination with the graph database for artificial intelligence, as well as with more sophisticated hardware such as legged or humanoid robots.

  2. DARPA Robotics Challenge (DRC) Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot

    DTIC Science & Technology

    2017-02-01

    Final report of the Florida Institute for Human and Machine Cognition (IHMC) on its work from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge, using human-machine teamwork to perform disaster response with a humanoid robot.

  3. Vocal emotion of humanoid robots: a study from brain mechanism.

    PubMed

    Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong

    2014-01-01

    Driven by rapid ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots in order to facilitate communication between machines and human beings, especially the vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe the brain responses involved in analyzing the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings.

  4. Upper Torso Control for HOAP-2 Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Sandoval, Steven P.

    2005-01-01

    Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, but it also opens many research doors into how humans function. The overall objective is to replace humans operating in unsafe environments. A first target application is the assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal of the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task requires the robot to identify several items, pick them up, transport them to the build location, and then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will be using the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex models of the system must be constructed in advance; only input/output relationships are required to model the system.
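    The passage's point is that a learning-based controller only needs recorded input/output pairs, not a physical model. The snippet below illustrates that idea with a plain one-hidden-layer network fitted by gradient descent on made-up command/angle data; it is not ANFIS (which adds fuzzy membership and rule layers) and none of the data or dimensions come from the HOAP-2 project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: servo command (rad) -> measured joint angle (rad),
# with a made-up nonlinearity standing in for unmodeled arm dynamics.
x = rng.uniform(-1.0, 1.0, (200, 1))
y = 0.8 * x + 0.1 * np.sin(3.0 * x) + rng.normal(0, 0.01, x.shape)

# One-hidden-layer network trained by plain gradient descent on the pairs.
w1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ w1 + b1)            # forward pass
    pred = h @ w2 + b2
    err = pred - y
    # Backpropagation of mean-squared-error gradients.
    g2 = h.T @ err / len(x)
    gh = (err @ w2.T) * (1 - h ** 2)
    g1 = x.T @ gh / len(x)
    w2 -= lr * g2; b2 -= lr * err.mean(axis=0)
    w1 -= lr * g1; b1 -= lr * gh.mean(axis=0)
print("final MSE:", float((err ** 2).mean()))
```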

  5. Robotic Literacy Learning Companions: Exploring Student Engagement with a Humanoid Robot in an Afterschool Literacy Program

    ERIC Educational Resources Information Center

    Levchak, Sofia

    2016-01-01

    This study was an investigation of the use of a NAO humanoid robot as an effective tool for engaging readers in an afterschool program as well as to find if increasing engagement using a humanoid robot would affect students' reading comprehension when compared to traditional forms of instruction. The targeted population of this study was…

  6. Vocal Emotion of Humanoid Robots: A Study from Brain Mechanism

    PubMed Central

    Wang, Youhui; Hu, Xiaohua; Zhou, Jie; Kuo, Taitzong

    2014-01-01

    Driven by rapid ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots in order to facilitate communication between machines and human beings, especially the vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe the brain responses involved in analyzing the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings. PMID:24587712

  7. Design and motion control of bioinspired humanoid robot head from servo motors toward artificial muscles

    NASA Astrophysics Data System (ADS)

    Almubarak, Yara; Tadesse, Yonas

    2017-04-01

    The potential applications of humanoid robots in social environments motivate researchers to design and control biomimetic humanoid robots. Generally, people are more interested in interacting with robots that have attributes and movements similar to those of humans. The head is one of the most important parts of any social robot. Currently, most humanoid heads use electrical motors, pneumatic actuators, and shape memory alloy (SMA) actuators for actuation. Electrical and pneumatic actuators take up most of the available space and can cause unsmooth motions, while SMAs are expensive to use in humanoids. Recently, in many robotic projects, Twisted and Coiled Polymer (TCP) artificial muscles have been used as linear actuators, which take up little space compared to motors. In this paper, we demonstrate the design process and motion control of a robotic head with TCP muscles. Servo motors and artificial muscles are used to actuate the head motion, controlled by a cost-efficient ARM Cortex-M7 based development board. A complete comparison between the two actuators is presented.

  8. A Course in Simulation and Demonstration of Humanoid Robot Motion

    ERIC Educational Resources Information Center

    Liu, Hsin-Yu; Wang, Wen-June; Wang, Rong-Jyue

    2011-01-01

    An introductory course for humanoid robot motion realization for undergraduate and graduate students is presented in this study. The basic operations of AX-12 motors and the mechanics combination of a 16 degrees-of-freedom (DOF) humanoid robot are presented first. The main concepts of multilink systems, zero moment point (ZMP), and feedback…

  9. Teen Sized Humanoid Robot: Archie

    NASA Astrophysics Data System (ADS)

    Baltes, Jacky; Byagowi, Ahmad; Anderson, John; Kopacek, Peter

    This paper describes our first teen-sized humanoid robot, Archie. This robot has been developed in conjunction with Prof. Kopacek's lab at the Technical University of Vienna. Archie uses brushless motors and harmonic gears with a novel approach to position encoding. Based on our previous experience with small humanoid robots, we developed software to create, store, and play back motions, as well as control methods which automatically balance the robot using feedback from an inertial measurement unit (IMU).

  10. Humanoids in Support of Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier

    2006-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and development spirals in the Project Constellation. These milestones relate to a set of incremental challenges, for the solving of which new humanoid technologies are needed. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.

  11. Sports Training Support Method by Self-Coaching with Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Toyama, S.; Ikeda, F.; Yasaka, T.

    2016-09-01

    This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain advanced understanding of the skill. Experimental results show the potential of the new training method; issues with the self-coaching interface program remain as future work.

  12. Acquisition of Basic Behaviors through Teleoperation using Robonaut

    NASA Technical Reports Server (NTRS)

    Campbell, Christina

    2004-01-01

    My area of research is in artificial intelligence and robotics. The major platform of this research is NASA's Robonaut. This humanoid robot is located at the Johnson Space Center. Prior to receiving this grant, I was able to spend two summers in Houston working with the Robonaut team, which is headed by Rob Ambrose. My work centered on teaching Robonaut to grasp a wrench based on data gathered as a human teleoperated the robot. I tried to make the procedure as general as possible so that many different motions could be taught using this method.

  13. Humanoids for lunar and planetary surface operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing

    2005-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and development spirals in the Project Constellation. These milestones relate to a set of incremental challenges, for the solving of which new humanoid technologies are needed. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.

  14. Robonaut Mobile Autonomy: Initial Experiments

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.

    2006-01-01

    A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  15. Slow walking model for children with multiple disabilities via an application of humanoid robot

    NASA Astrophysics Data System (ADS)

    Wang, ZeFeng; Peyrodie, Laurent; Cao, Hua; Agnani, Olivier; Watelain, Eric; Wang, HaoPing

    2016-02-01

    Research on walk training for children with multiple disabilities is presented. Orthosis-assisted walking for children with multiple disabilities such as cerebral palsy continues to be a clinical and technological challenge. In order to reduce pain and improve treatment strategies, an intermediate structure - the humanoid robot NAO - is proposed as an assay platform for studying walk-training models to be transferred to future special exoskeletons for children. A suitable and stable walking model is proposed for walk training; it is simulated and tested on NAO. A comparative study of zero moment point (ZMP) support polygons and energy consumption validates the model as more stable than the conventional NAO gait. According to the direction variation of the center of mass and the slopes of the linear-regression knee/ankle angles, the Slow Walk model faithfully emulates the gait pattern of children.
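    Stability in the comparison is judged partly by whether the zero moment point stays inside the support polygon. The gait model itself is not reproduced here; the snippet below is only a generic point-in-convex-polygon test of the kind used for such a ZMP check, with a made-up rectangular single-support footprint roughly sized for NAO.

```python
import numpy as np

def zmp_inside_support(zmp_xy, polygon_xy):
    """Return True if the ZMP lies inside a convex support polygon whose
    vertices are given in counter-clockwise order (cross-product test)."""
    poly = np.asarray(polygon_xy, dtype=float)
    p = np.asarray(zmp_xy, dtype=float)
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        edge, rel = b - a, p - a
        if edge[0] * rel[1] - edge[1] * rel[0] < 0.0:   # point right of edge
            return False
    return True

# Hypothetical single-support footprint (metres), CCW vertices.
foot = [(-0.05, -0.035), (0.08, -0.035), (0.08, 0.035), (-0.05, 0.035)]
print(zmp_inside_support((0.02, 0.01), foot))   # True
print(zmp_inside_support((0.10, 0.00), foot))   # False
```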

  16. Acquiring neural signals for developing a perception and cognition model

    NASA Astrophysics Data System (ADS)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to the automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and information science research on context-based modeling, provide us with a theoretical basis for using a bottom-up approach to automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities in order to develop the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located in the hips, knees, and ankles for walking, 6 DOFs in the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated with the robot-walking behaviors.
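    The pipeline records EEG during imagined robot-walking behaviors and then looks for signal features correlated with those behaviors. The exact feature extraction is not specified in the abstract, so the snippet below shows only one common first step, band-power estimation for a single channel via a plain FFT periodogram, with an assumed sampling rate and standard band edges.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of one EEG channel in a frequency band,
    using a plain FFT periodogram (a Welch estimate would be smoother)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Hypothetical 2-second, 256 Hz channel: alpha-band oscillation plus noise.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 2e-6, t.size)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
```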

  17. Humanoid robotics in health care: An exploration of children's and parents' emotional reactions.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2015-07-01

    A new non-pharmacological method of distraction was tested with 57 children during their annual flu vaccination. Given children's growing enthusiasm for technological devices, a humanoid robot was programmed to interact with them while a nurse administered the vaccination. Children smiled more often with the robot, as compared to the control condition, but they did not cry less. Parents indicated that their children held stronger memories for the robot than for the needle, wanted the robot in the future, and felt empowered to cope. We conclude that children and their parents respond positively to a humanoid robot at the bedside. © The Author(s) 2013.

  18. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    DTIC Science & Technology

    2006-01-01

    [Indexed excerpt only; no abstract available. The fragments are reference-list entries from the report, including Brooks, R., Bryson, J., Marjanovic, M., Stein, L. A., & Wessler, M. (1996), "Humanoid Software", Technical report, MIT Artificial Intelligence Lab, and Marjanovic, M. (1995), "Learning Functional Maps Between Sensorimotor Systems on a Humanoid Robot", Master's thesis, MIT.]

  19. The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study

    ERIC Educational Resources Information Center

    Bacivarov, Ioan C.; Ilian, Virgil L. M.

    2012-01-01

    This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Although a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…

  20. Generate an Optimum Lightweight Legs Structure Design Based on Critical Posture in A-FLoW Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Luthfi, A.; Subhan, K. A.; Eko H, B.; Sanggar, D. R.; Pramadihanto, D.

    2018-04-01

    Lightweight construction and energy efficiency play an important role in humanoid robot development. The application of computer-aided engineering (CAE) in the development process is one way to achieve an appropriate weight reduction. This paper describes a method to generate an optimum lightweight leg structure design based on the critical posture during the walking locomotion of the A-FLoW humanoid robot. The critical posture can be obtained from the highest forces and moments in each joint of the robot body during walking. From the finite element analysis (FEA) results, a leg structure for the A-FLoW humanoid robot was realized with a maximum displacement of 0.05 mm and a weight reduction of about 0.598 kg for the thigh structure, and a maximum displacement of 0.13 mm and a weight reduction of about 0.57 kg for the shin structure.

  1. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-11-24

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, shows promise for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure, by integrating multiple software programs such as OpenViBE, Choregraph, and Central software as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
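    The procedure analyzes SSVEP features offline and trains a per-subject classifier before online control, but this record does not name the classifier. A common choice for SSVEP detection is canonical correlation analysis (CCA) against sinusoidal references at each stimulus frequency; the numpy-only sketch below assumes that approach, with hypothetical sampling rate, stimulus frequencies, and harmonic count.

```python
import numpy as np

def max_canonical_corr(x, y):
    """Largest canonical correlation between column-centred matrices
    X (samples x channels) and Y (samples x references), computed from
    the singular values of Qx^T Qy after thin QR decompositions."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(xc)
    qy, _ = np.linalg.qr(yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_classify(eeg, fs, stim_freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sinusoidal references correlate
    best with the multichannel EEG segment (CCA-based SSVEP detection)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return stim_freqs[int(np.argmax(scores))], scores

# Hypothetical 4-channel, 2 s segment flickering at 7.5 Hz (fs = 250 Hz).
fs = 250
t = np.arange(0, 2, 1.0 / fs)
eeg = np.column_stack([np.sin(2 * np.pi * 7.5 * t + p) for p in (0, .3, .6, .9)])
eeg += np.random.normal(0, 0.5, eeg.shape)
print(ssvep_classify(eeg, fs, [6.0, 7.5, 8.57, 10.0])[0])   # expect 7.5
```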

  2. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots

    PubMed Central

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-01-01

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, shows promise for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure, by integrating multiple software programs such as OpenViBE, Choregraph, and Central software as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled. PMID:26650051

  3. Social cognitive neuroscience and humanoid robotics.

    PubMed

    Chaminade, Thierry; Cheng, Gordon

    2009-01-01

    We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We first review important aspects of this framework. In a second part, we discuss how this framework is used to address questions pertaining to artificial agents' social competence. We focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

  4. Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.

    PubMed

    Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan

    2015-12-01

    Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well tolerated in the sample, children with ASD exhibited greater attention to the robotic system than to the human administrator, and for children with ASD imitation performance appeared superior during the robotic interaction.

  5. Foot Placement Modification for a Biped Humanoid Robot with Narrow Feet

    PubMed Central

    Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall down, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to inhibit the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. Verification of the proposed method is conducted through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with feet 30 mm in breadth. Moreover, WABIAN-2R fitted with a human-like foot mechanism mimicking the human foot-arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion. PMID:24592154
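    The stabilization modifies the swing-foot roll angle so the sole lands flat and shifts the lateral landing point when the trunk starts to fall outward; the gains and the virtual compliance law are not given in the abstract. The snippet below is only a schematic of that placement-modification rule with assumed proportional gains and sign conventions.

```python
def modify_foot_placement(trunk_roll_deg, nominal_landing_y_m,
                          k_angle=1.0, k_lateral=0.004, max_shift_m=0.03):
    """Schematic swing-foot modification for a narrow-footed biped:
    tilt the foot so its sole meets the ground horizontally despite the
    trunk roll, and shift the landing point laterally (toward the lean)
    to stop the robot falling outward. Gains and limits are assumptions."""
    foot_roll_deg = -k_angle * trunk_roll_deg        # land the sole flat
    shift = k_lateral * trunk_roll_deg               # metres per degree of lean
    shift = max(-max_shift_m, min(max_shift_m, shift))
    return foot_roll_deg, nominal_landing_y_m + shift

# Robot leaning 4 degrees outward; nominal landing 0.08 m from the midline.
print(modify_foot_placement(4.0, 0.08))   # -> (-4.0, 0.096)
```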

  6. Foot placement modification for a biped humanoid robot with narrow feet.

    PubMed

    Hashimoto, Kenji; Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall down, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to inhibit the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. Verification of the proposed method is conducted through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with feet 30 mm in breadth. Moreover, WABIAN-2R fitted with a human-like foot mechanism mimicking the human foot-arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion.

  7. The second me: Seeing the real body during humanoid robot embodiment produces an illusion of bi-location.

    PubMed

    Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane

    2016-11-01

    Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot head movements and walk, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Humanoid assessing rehabilitative exercises.

    PubMed

    Simonov, M; Delconte, G

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "New Methodologies for Patients Rehabilitation". The article presents an approach in which the rehabilitative exercise prepared by a healthcare professional is encoded as formal knowledge and used by a humanoid robot to assist patients without involving other care actors. The main objective is the use of humanoids in rehabilitative care; an example is pulmonary rehabilitation in COPD patients. Another goal is an automated judgment functionality to determine how well the rehabilitation exercise matches the pre-programmed correct sequence. We use the Aldebaran Robotics NAO humanoid to set up the artificial cognitive application. The pre-programmed NAO induces an elderly patient to undertake a humanoid-driven rehabilitation exercise, but it needs to evaluate the human actions against the correct template. The patient is observed through NAO's eyes. We use the Microsoft Kinect SDK to extract the motion path from the humanoid's recorded video. We compare human- and humanoid-operated process sequences using Dynamic Time Warping (DTW) and test the prototype. This artificial cognitive software showcases the use of the DTW algorithm to enable humanoids to judge, in near real time, the correctness of rehabilitative exercises performed by patients following the robot's indications. Better and more sustainable rehabilitative care services could be enabled in remote residential settings by combining intelligent applications piloting humanoids with the DTW pattern-matching algorithm applied at run time to compare humanoid- and human-operated process sequences. In turn, this will lower the need for human care.
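    The judgment step compares the robot-demonstrated sequence against the patient's Kinect-extracted motion path using Dynamic Time Warping. Below is a minimal, generic DTW distance implementation (plain Python, Euclidean frame distance, no windowing constraint); the pass/fail threshold would be application-specific and is not taken from the article.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two motion sequences, where
    each frame is a tuple of joint coordinates or angles. O(len_a * len_b)."""
    la, lb = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (lb + 1) for _ in range(la + 1)]
    d[0][0] = 0.0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])   # frame distance
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[la][lb]

# Hypothetical single-joint exercise: the patient performs it slightly slower.
template = [(0.0,), (0.2,), (0.5,), (0.8,), (1.0,)]
patient = [(0.0,), (0.1,), (0.2,), (0.5,), (0.7,), (1.0,)]
print(dtw_distance(template, patient))
```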

  9. Can We Talk to Robots? Ten-Month-Old Infants Expected Interactive Humanoid Robots to Be Talked to by Persons

    ERIC Educational Resources Information Center

    Arita, A.; Hiraki, K.; Kanda, T.; Ishiguro, H.

    2005-01-01

    As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the…

  10. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    PubMed

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains as yet unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during a human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.

  11. Artificial heart for humanoid robot

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Wu, Lianjun; Tadesse, Yonas

    2014-03-01

    A soft robotic device inspired by the pumping action of a biological heart is presented in this study. Developing an artificial heart for a humanoid robot helps us build better biomedical devices for ultimate use in humans, and as technology advances, high-performance, biomimetic artificial organs come closer to practical implementation. In this paper, we present the design and development of a soft artificial heart that can be used in a humanoid robot and that simulates the functions of a human heart using shape memory alloy technology. The robotic heart is designed to pump a blood-like fluid to parts of the robot such as the face, using elastomeric substrates and embedded channels for fluid transport, so that the robot can simulate blushing or the flushing that occurs when someone is angry.

  12. The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink

    NASA Astrophysics Data System (ADS)

    Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli

    A simulation method for an adaptive controller of a humanoid robot system is proposed, based on the co-simulation of Solidworks, ADAMS and Simulink. The method avoids a complex mathematical modeling process while making full use of Simulink's real-time dynamic simulation capability, and it can be generalized to other complicated control systems. The method is adopted to build and analyse the model of the humanoid robot, and the trajectory tracking and adaptive controller design also proceed on this basis. The trajectory-tracking performance is evaluated by least-squares curve fitting, and comparative analysis shows that the anti-interference capability of the robot is considerably improved.
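
    As a small illustration of the least-squares evaluation step, the sketch below fits a polynomial to a simulated joint response and reports RMS tracking error; the polynomial order and the synthetic data are assumptions, not values from the paper.

    ```python
    import numpy as np

    def tracking_fit_error(t, reference, actual, order=3):
        """Least-squares polynomial fit of the actual trajectory, plus RMS errors."""
        coeffs = np.polyfit(t, actual, order)          # least-squares fitting
        fitted = np.polyval(coeffs, t)
        rms_tracking = np.sqrt(np.mean((actual - reference) ** 2))
        rms_fit = np.sqrt(np.mean((actual - fitted) ** 2))
        return coeffs, rms_tracking, rms_fit

    if __name__ == "__main__":
        t = np.linspace(0.0, 2.0, 200)
        reference = 0.3 * np.sin(np.pi * t)                    # desired joint angle [rad]
        actual = reference + 0.01 * np.random.randn(t.size)    # simulated noisy tracking result
        _, rms_track, rms_fit = tracking_fit_error(t, reference, actual)
        print(f"RMS tracking error {rms_track:.4f} rad, fit residual {rms_fit:.4f} rad")
    ```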

  13. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sound and the humanoid's actual walk reduces the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the BCI user's motor decisions and strengthen the feeling of control over the robot. Our results shed light on the possibility of improving robot control through the combination of multisensory feedback to a BCI user. PMID:24987350

  14. Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism

    ERIC Educational Resources Information Center

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot…

  15. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center’s Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of “dull, dirty or dangerous” tasks, allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center’s newest development areas, can provide crew with low mass exercise capability and also augment an astronaut’s strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center’s Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  16. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of "dull, dirty or dangerous" tasks, allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center's newest development areas, can provide crew with low mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in-orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles, and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space: Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space, and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force measuring shoes, and grasp assist gloves.

  17. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: a hybridization of simple BCI protocols can provide extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  18. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: a hybridization of simple BCI protocols can provide extended controllability to carry out complicated tasks even with a low-cost system.
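
    The division of labor among the three signals can be sketched as a simple dispatcher that turns classifier outputs into robot commands. The class, mode names, and command strings below are assumptions for illustration and do not reproduce the authors' software.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BciDecision:
        ssvep_target: Optional[str]   # e.g. "left", "right", "forward" from the SSVEP classifier
        erd_detected: bool            # motor-imagery (ERD) event, used here to toggle modes
        p300_choice: Optional[int]    # index of the attended object during recognition

    def dispatch(decision: BciDecision, mode: str) -> str:
        """Map one epoch of classifier outputs to a high-level humanoid command."""
        if mode == "navigation":
            if decision.erd_detected:
                return "switch_to_recognition"           # ERD switches between task modes
            if decision.ssvep_target is not None:
                return f"walk_{decision.ssvep_target}"   # SSVEP steers the robot
            return "idle"
        if mode == "recognition":
            if decision.p300_choice is not None:
                return f"select_object_{decision.p300_choice}"  # P300 picks the favorite object
            return "keep_scanning"
        raise ValueError(f"unknown mode: {mode}")

    if __name__ == "__main__":
        print(dispatch(BciDecision("forward", False, None), mode="navigation"))
        print(dispatch(BciDecision(None, False, 2), mode="recognition"))
    ```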

  19. Pre-Schoolers' Interest and Caring Behaviour around a Humanoid Robot

    ERIC Educational Resources Information Center

    Ioannou, Andri; Andreou, Emily; Christofi, Maria

    2015-01-01

    This exploratory case study involved a humanoid robot, NAO, and four pre-schoolers. NAO was placed in an indoor playground together with other toys and appeared as a peer who played, talked, danced and told stories. Analysis of video recordings focused on children's behaviour around NAO and how the robot gained children's attention and…

  20. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task

    PubMed Central

    Rogers, Wendy A.

    2015-01-01

    Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936

  1. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task.

    PubMed

    Prakash, Akanksha; Rogers, Wendy A

    2015-04-01

    Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger ( N = 32) and older adults ( N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.

  2. LARM PKM solutions for torso design in humanoid robots

    NASA Astrophysics Data System (ADS)

    Ceccarelli, Marco

    2014-12-01

    Human-like torso features are essential in humanoid robots. In this paper, design and operation problems for robotic torso solutions are discussed with reference to experiences and designs developed at the Laboratory of Robotics and Mechatronics (LARM) in Cassino, Italy. A new solution is presented in conceptual views as a waist-trunk structure that properly partitions the torso-supported performance between walking and arm operation.

  3. Robot-Mediated Interviews - How Effective Is a Humanoid Robot as a Tool for Interviewing Young Children?

    PubMed Central

    Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre

    2013-01-01

    Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children’s responses towards the humanoid robot KASPAR in an interview context differ in comparison to their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children’s behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an ‘interviewer’ for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625

  4. "Robovie, You'll Have to Go into the Closet Now": Children's Social and Moral Relationships with a Humanoid Robot

    ERIC Educational Resources Information Center

    Kahn, Peter H., Jr.; Kanda, Takayuki; Ishiguro, Hiroshi; Freier, Nathan G.; Severson, Rachel L.; Gill, Brian T.; Ruckert, Jolina H.; Shen, Solace

    2012-01-01

    Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter…

  5. Robonaut: a robot designed to work with humans in space

    NASA Technical Reports Server (NTRS)

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as spacewalking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.

  6. Robonaut: a robot designed to work with humans in space.

    PubMed

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as spacewalking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.

  7. From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot

    DTIC Science & Technology

    2003-06-01

    pages 961-968. Brooks, R. A., Breazeal, C., Marjanovic, M., and Scassellati, B. (1999). The Cog project: Building a humanoid robot. Lecture Notes in...investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38. Marjanovic, M. (1995). Learning functional maps between sensorimotor

  8. Autonomous learning in humanoid robotics through mental imagery.

    PubMed

    Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo

    2013-05-01

    In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    NASA Astrophysics Data System (ADS)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  10. Control of humanoid robot via motion-onset visual evoked potentials

    PubMed Central

    Li, Wei; Li, Mengfan; Zhao, Jing

    2015-01-01

    This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar through robot images that intuitively represent the robot behaviors to be controlled with the mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain a high accuracy. The study results document an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918

  11. Drift-Free Humanoid State Estimation fusing Kinematic, Inertial and LIDAR Sensing

    DTIC Science & Technology

    2014-08-01

    registration to this map and other objects in the robot’s vicinity while also contributing to direct low-level control of a Boston Dynamics Atlas robot...requirements. I. INTRODUCTION Dynamic locomotion of legged robotic systems remains an open and challenging research problem whose solution will enable...humanoids to perform tasks and reach places inaccessible to wheeled or tracked robots. Several research institutions are developing walking and running

  12. Supervising Remote Humanoids Across Intermediate Time Delay

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Rabe, Kenneth; Allan, Mark

    2006-01-01

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions which the remote humanoid autonomously executes. Initial results are presented.
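
    The prediction idea can be illustrated in a few lines: the ground station extrapolates the supervisor's tracked motion over the known one-way delay and sends the predicted setpoint, which the remote humanoid tracks autonomously. The constant-velocity predictor and the numbers are illustrative assumptions, not the approach's actual prediction software.

    ```python
    def predict_setpoint(history, delay_s):
        """Extrapolate the supervisor's motion across the communication delay using a
        constant-velocity model over the two most recent samples.

        history  list of (timestamp_s, position_m) tuples, oldest first
        """
        (t0, p0), (t1, p1) = history[-2], history[-1]
        velocity = (p1 - p0) / (t1 - t0)
        return p1 + velocity * delay_s   # where the hand should be when the command arrives

    if __name__ == "__main__":
        hand_x = [(0.0, 0.10), (0.1, 0.12), (0.2, 0.15)]   # tracked supervisor hand position [m]
        print(f"predicted hand x after a 2 s delay: {predict_setpoint(hand_x, 2.0):.2f} m")
    ```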

  13. Toward humanoid robots for operations in complex urban environments

    NASA Astrophysics Data System (ADS)

    Pratt, Jerry E.; Neuhaus, Peter; Johnson, Matthew; Carff, John; Krupp, Ben

    2010-04-01

    Many infantry operations in urban environments, such as building clearing, are extremely dangerous and difficult and often result in high casualty rates. Despite the fast pace of technological progress in many other areas, the tactics and technology deployed for many of these dangerous urban operations have not changed much in the last 50 years. While robots have been extremely useful for improvised explosive device (IED) detonation, under-vehicle inspection, surveillance, and cave exploration, there is still no fieldable robot that can operate effectively in cluttered streets and inside buildings. Developing a fieldable robot that can maneuver in complex urban environments is challenging due to narrow corridors, stairs, rubble, doors and cluttered doorways, and other obstacles. Typical wheeled and tracked robots have trouble getting through most of these obstacles. A bipedal humanoid is ideally shaped for many of these obstacles because its legs are long and skinny. Therefore it has the potential to step over large barriers, gaps, rocks, and steps, yet squeeze through narrow passageways and through narrow doorways. By being able to walk with one foot directly in front of the other, humanoids also have the potential to walk over narrow "balance beam" style objects and can cross a narrow row of stepping stones. We describe some recent advances in humanoid robots, particularly recovery from disturbances such as pushes, and walking over rough terrain. Our disturbance recovery algorithms are based on the concept of Capture Points. An N-Step Capture Point is a point on the ground that a legged robot can step to in order to stop in N steps. The N-Step Capture Region is the set of all N-Step Capture Points. In order to walk without falling, a legged robot must step somewhere in the intersection between an N-Step Capture Region and the available footholds on the ground. We present results of push recovery using Capture Points on our humanoid robot M2V2.
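
    For the single-step case, the instantaneous capture point of the linear inverted pendulum model is commonly written as x_cp = x + x_dot/omega with omega = sqrt(g/z). The sketch below uses that textbook relation together with a simplified circular reachability check; the center-of-mass values and the reach radius are illustrative, not M2V2 parameters.

    ```python
    import math

    GRAVITY = 9.81

    def instantaneous_capture_point(com_xy, com_vel_xy, com_height):
        """Capture point of the linear inverted pendulum: x_cp = x + x_dot / omega."""
        omega = math.sqrt(GRAVITY / com_height)
        return (com_xy[0] + com_vel_xy[0] / omega,
                com_xy[1] + com_vel_xy[1] / omega)

    def foothold_captures(com_xy, com_vel_xy, com_height, foothold_xy, reach_radius=0.10):
        """Crude one-step check: is the candidate foothold close enough to the capture point?"""
        cp = instantaneous_capture_point(com_xy, com_vel_xy, com_height)
        return math.dist(cp, foothold_xy) <= reach_radius

    if __name__ == "__main__":
        cp = instantaneous_capture_point((0.0, 0.0), (0.4, 0.0), 0.9)   # robot pushed forward
        print(f"capture point: ({cp[0]:.2f}, {cp[1]:.2f}) m")
        print("foothold stops the push:",
              foothold_captures((0.0, 0.0), (0.4, 0.0), 0.9, (0.12, 0.0)))
    ```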

  14. Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    PubMed Central

    Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio

    2015-01-01

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs to express innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. The latest research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from them. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants’ psychophysiological responses have been explored to investigate whether there could be differences induced by interpreting robot or human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects’ psychophysiological state have been found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199

  15. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze over the one with a disjoint attention behavior, rated it as more human-like, and found it more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.
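
    The return-saccade latency measure can be made concrete with a small gaze-sample scan: find the first sample after the robot's gaze shift that falls back inside a region around the robot's face. The sampling values, region radius, and coordinates are invented for illustration and do not reflect the study's eyetracking pipeline.

    ```python
    def return_saccade_onset(gaze_samples, robot_face_xy, shift_time_s, roi_radius=0.05):
        """Latency (s) from the robot's gaze shift to the first gaze sample that lands
        back inside a circular region of interest around the robot's face.

        gaze_samples  list of (timestamp_s, x, y) in screen-normalized coordinates
        """
        fx, fy = robot_face_xy
        for t, x, y in gaze_samples:
            if t < shift_time_s:
                continue                     # ignore samples before the robot moved its eyes
            if (x - fx) ** 2 + (y - fy) ** 2 <= roi_radius ** 2:
                return t - shift_time_s
        return None                          # the participant never looked back this trial

    if __name__ == "__main__":
        samples = [(0.00, 0.20, 0.50), (0.15, 0.30, 0.50),
                   (0.30, 0.48, 0.51), (0.45, 0.50, 0.50)]
        print(return_saccade_onset(samples, robot_face_xy=(0.50, 0.50), shift_time_s=0.10))
    ```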

  16. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  17. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  18. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  19. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  20. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  1. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  2. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  3. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José

    There are several approaches to humanoid robot gait planning. The problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria such as energy minimization, acceleration and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. This paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that the optimization is a valid gait planning technique.
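
    A generic simulated-annealing loop of the kind such gait optimization relies on is sketched below over a stand-in cost function; this is plain simulated annealing with invented gait parameters, not the Stretched Simulated Annealing variant or the authors' simulation model.

    ```python
    import math
    import random

    def gait_cost(params):
        """Stand-in for a walking simulation returning an energy-like cost for a
        gait parameter vector (step length [m], step period [s], elastic stiffness)."""
        step_len, period, k_elastic = params
        return (step_len - 0.25) ** 2 + (period - 0.6) ** 2 + 0.1 * (k_elastic - 12.0) ** 2

    def anneal(start, cost, iters=5000, t0=1.0, cooling=0.999, step=0.05, seed=1):
        rng = random.Random(seed)
        current, current_c = list(start), cost(start)
        best, best_c = list(start), current_c
        temp = t0
        for _ in range(iters):
            candidate = [p + rng.gauss(0.0, step) for p in current]
            cand_c = cost(candidate)
            # always accept improvements; accept worse moves with Boltzmann probability
            if cand_c < current_c or rng.random() < math.exp((current_c - cand_c) / temp):
                current, current_c = candidate, cand_c
                if current_c < best_c:
                    best, best_c = list(current), current_c
            temp *= cooling
        return best, best_c

    if __name__ == "__main__":
        params, cost_value = anneal([0.10, 1.00, 10.0], gait_cost)
        print("optimized gait parameters:", [round(p, 3) for p in params], round(cost_value, 5))
    ```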

  4. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

    Programming of robots today is often done in a procedural fashion, without incorporating object-oriented programming. In order to keep a robust architecture allowing for easy expansion of capabilities and a truly modular design, object-oriented programming is required. However, concepts from object-oriented programming are not typically applied to a real-time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework that abstracts control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.
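
    The kind of abstraction described can be pictured as a uniform controller interface plus a safety layer that clamps commands before they reach the joints. The class names, joint names, and limits below are assumptions for illustration; this is not the HOAP-2 framework's actual API.

    ```python
    from abc import ABC, abstractmethod

    class Controller(ABC):
        """Uniform interface so independently developed controllers can be run
        interchangeably, or simultaneously on disjoint joint sets."""
        @abstractmethod
        def update(self, sensors: dict) -> dict:
            """Map the latest sensor snapshot to desired joint angles [rad]."""

    class WaveArm(Controller):
        def update(self, sensors):
            t = sensors["time"]
            return {"r_shoulder_pitch": 0.3 if int(t * 2) % 2 else -0.3}

    class SafetyLayer:
        """Clamp every command to per-joint limits before it reaches the motors."""
        def __init__(self, limits):
            self.limits = limits                       # joint -> (min_rad, max_rad)
        def filter(self, command):
            safe = {}
            for joint, angle in command.items():
                lo, hi = self.limits.get(joint, (-1.5, 1.5))
                safe[joint] = min(max(angle, lo), hi)
            return safe

    if __name__ == "__main__":
        safety = SafetyLayer({"r_shoulder_pitch": (-0.25, 0.25)})
        controller = WaveArm()
        for step in range(3):                          # stand-in for the real-time loop
            sensors = {"time": step * 0.5}
            print(safety.filter(controller.update(sensors)))
    ```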

  5. Robotic system construction with mechatronic components inverted pendulum: humanoid robot

    NASA Astrophysics Data System (ADS)

    Sandru, Lucian Alexandru; Crainic, Marius Florin; Savu, Diana; Moldovan, Cristian; Dolga, Valer; Preitl, Stefan

    2017-03-01

    Mechatronics is a new methodology used to achieve an optimal design of an electromechanical product. This methodology is a collection of practices, procedures and rules used by those who work in a particular branch of knowledge or discipline. Education in mechatronics at the Polytechnic University Timisoara is organized on three levels: bachelor, master and PhD studies. These activities also cover the design of mechatronic systems. In this context, the design, implementation and experimental study of a family of mechatronic demonstrators occupies an important place. In this paper, a variant of a mechatronic demonstrator based on the combination of electrical and mechanical components is proposed. The demonstrator, named the humanoid robot, is equivalent to an inverted pendulum. An analysis of the components for the associated functions of the humanoid robot is presented. This way of developing mechatronic systems, by combining hardware and software, offers the opportunity to build optimal solutions.
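
    Because the demonstrator is treated as an inverted pendulum, a few lines of simulation make the equivalence concrete. The pendulum length, PD gains, and time step are illustrative choices only, not parameters of the demonstrator described above.

    ```python
    import math

    def simulate_inverted_pendulum(theta0=0.10, length=0.5, kp=40.0, kd=8.0,
                                   dt=0.001, steps=3000, g=9.81):
        """Euler-integrate theta_dd = (g/l)*sin(theta) - u with a PD stabilizer u,
        expressed per unit of pendulum inertia."""
        theta, omega = theta0, 0.0
        for _ in range(steps):
            u = kp * theta + kd * omega                 # stabilizing control term
            alpha = (g / length) * math.sin(theta) - u  # angular acceleration
            omega += alpha * dt
            theta += omega * dt
        return theta

    if __name__ == "__main__":
        print(f"tilt after 3 s of control: {simulate_inverted_pendulum():.5f} rad")
    ```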

  6. Adaptive neural control for dual-arm coordination of humanoid robot with unknown nonlinearities in output mechanism.

    PubMed

    Liu, Zhi; Chen, Ci; Zhang, Yun; Chen, C L P

    2015-03-01

    To achieve excellent dual-arm coordination of a humanoid robot, it is essential to deal with the nonlinearities existing in the system dynamics. The literature on humanoid robot control so far commonly assumes that the problem of output hysteresis can be ignored. In practical applications, however, output hysteresis is widespread, and its existence limits the motion/force performance of the robotic system. In this paper, an adaptive neural control scheme, which takes the unknown output hysteresis and computational efficiency into account, is presented and investigated. In the controller design, the prior knowledge of the system dynamics is assumed to be unknown. The motion error is guaranteed to converge to a small neighborhood of the origin by Lyapunov's stability theory. Simultaneously, the internal force is kept bounded and its error can be made arbitrarily small.

  7. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    PubMed Central

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver

    2017-01-01

    Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP).1 At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments. PMID:28179882

  8. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform.

    PubMed

    Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C; Gewaltig, Marc-Oliver

    2017-01-01

    Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments.

  9. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777

  10. RoboJockey: Designing an Entertainment Experience with Robots.

    PubMed

    Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo

    2016-01-01

    The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.

  11. Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context

    DTIC Science & Technology

    2016-12-01

    Case Study Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it has been first deployed on a team of...features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to...on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment. I. INTRODUCTION

  12. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related...universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  13. How Do Young Children Deal with Hybrids of Living and Non-Living Things: The Case of Humanoid Robots

    ERIC Educational Resources Information Center

    Saylor, Megan M.; Somanader, Mark; Levin, Daniel T.; Kawamura, Kazuhiko

    2010-01-01

    In this experiment, we tested children's intuitions about entities that bridge the contrast between living and non-living things. Three- and four-year-olds were asked to attribute a range of properties associated with living things and machines to novel category-defying complex artifacts (humanoid robots), a familiar living thing (a girl), and a…

  14. Building Robota, a mini-humanoid robot for the rehabilitation of children with autism.

    PubMed

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot to assess children's imitation ability and to teach children simple coordinated behaviors. In this article, the authors review the recent technological developments that have made the Robota robots suitable for use with children with autism. They critically appraise the main outcomes of two sets of behavioral studies conducted with Robota and discuss how these results inform future development of the Robota robots and robots in general for the rehabilitation of children with complex developmental disabilities.

  15. Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.

    PubMed

    Leylavi Shoushtari, Ali

    2016-01-01

    Motion planning for humanoid robots is a critical issue due to their high redundancy and to theoretical and technical considerations such as stability, motion feasibility and collision avoidance. The strategies that the central nervous system employs to plan, signal and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region in which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism and stability of the predicted motion.
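
    A simplified version of the boundary test can be sketched with capsule-shaped body segments: the carried box is kept outside an inflated distance from every segment. The segment list, radii, and margin are invented; the paper's twelve parametric boundary functions are not reproduced here.

    ```python
    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment a-b, all given as (x, y, z) tuples."""
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        denom = sum(c * c for c in ab) or 1e-12
        t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
        closest = tuple(a[i] + t * ab[i] for i in range(3))
        return math.dist(p, closest)

    def box_clear_of_body(box_corners, capsules, margin=0.02):
        """Each capsule is (endpoint_a, endpoint_b, radius); the box is approximated
        by its corner points. True if every corner keeps the safety margin."""
        for corner in box_corners:
            for a, b, radius in capsules:
                if point_segment_distance(corner, a, b) < radius + margin:
                    return False
        return True

    if __name__ == "__main__":
        torso = ((0.0, 0.0, 0.8), (0.0, 0.0, 1.3), 0.12)           # crude torso capsule
        right_arm = ((0.0, 0.18, 1.25), (0.0, 0.35, 1.00), 0.05)   # crude upper-arm capsule
        box = [(0.25, 0.0, 1.0), (0.35, 0.0, 1.0), (0.25, 0.1, 1.0), (0.35, 0.1, 1.0)]
        print("box inside the safe region:", box_clear_of_body(box, [torso, right_arm]))
    ```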

  16. Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David

    2003-01-01

    Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint project between NASA and the Defense Advanced Research Projects Agency (DARPA), and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).

  17. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin.

    PubMed

    Roncone, Alessandro; Hoffmann, Matej; Pattacini, Ugo; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.

  18. Walk-Startup of a Two-Legged Walking Mechanism

    NASA Astrophysics Data System (ADS)

    Babković, Kalman; Nagy, László; Krklješ, Damir; Borovac, Branislav

    There is a growing interest in humanoid robots. One of their most important characteristics is two-legged motion, i.e., walking. Starting and stopping of humanoid robots introduce substantial delays. In this paper, the goal is to explore the possibility of using a short unbalanced state of the biped robot to quickly gain speed and achieve the steady state velocity during a period shorter than half of the single support phase. The proposed method is verified by simulation. Maintaining a steady state, balanced gait is not considered in this paper.

  19. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    PubMed

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  20. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    PubMed Central

    Alexandrov, Alexei V.; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A.; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices. PMID:28487646
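
    To make the decoupling idea concrete, the sketch below sets up a hypothetical linearized ankle-hip plant and computes eigenmovements as generalized eigenvectors of the gravity-stiffness and inertia matrices; the matrices are invented for illustration and do not come from the authors' biomechanical identification.

```python
# Illustrative sketch (not the authors' implementation): for a linearized
# double inverted pendulum (ankle-hip) with inertia matrix M and gravity
# "stiffness" matrix G, the eigenmovements are the generalized eigenvectors
# of (G, M).  Projecting the dynamics onto them yields independent scalar
# equations, i.e., coupling-free control channels.
import numpy as np
from scipy.linalg import eigh

# Hypothetical plant matrices; real values come from biomechanical
# identification of the robot.
M = np.array([[80.0, 25.0],
              [25.0, 15.0]])
G = np.array([[600.0, 120.0],
              [120.0, 100.0]])

# Generalized eigenvalue problem  G w = lambda M w  -> eigenmovement matrix W.
eigvals, W = eigh(G, M)

# In eigenmovement coordinates both matrices become diagonal, so each
# coordinate can be stabilized by its own scalar PD controller.
M_em = W.T @ M @ W
G_em = W.T @ G @ W
print("diagonalized inertia:\n", np.round(M_em, 6))
print("diagonalized stiffness:\n", np.round(G_em, 6))
```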

  1. Humanoid Robot Control System Balance Dance Indonesia and Reader Filters Using Complementary Angle Values

    NASA Astrophysics Data System (ADS)

    Sholihin; Susanti, Eka

    2018-02-01

    Advances in technology are driving interest in humanoid robots, which resemble human beings in both structure and actuation. In the humanoid robot described here, an MPU6050 inertial measurement module is a key component because it provides angle readings about the X and Y reference axes; the raw angle readings, however, contain noise if they are not filtered beforehand. A complementary filter is used to reduce this noise, with the filter coefficient and the sampling time determining how the angle estimate is updated. The filtered angle value is then fed to a PID controller whose output is converted into servo pulses. Experiments were performed to find the most stable angle reading, which was obtained with the filter coefficient a = 0.96 and the sampling time dt = 10 ms.
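
    A minimal sketch of the filter-plus-PID chain described above is given below; the IMU readings and PID gains are made up, while the filter coefficient a = 0.96 and sampling time dt = 10 ms follow the values reported in the abstract.

```python
# Minimal sketch of the filtering-and-control chain described in the abstract,
# under assumed IMU readings: a complementary filter with coefficient a = 0.96
# and sampling time dt = 10 ms fuses gyro and accelerometer angles, and the
# filtered angle feeds a PID controller whose output would drive the servos.
A_COEF = 0.96
DT = 0.010  # 10 ms sampling time

def complementary_filter(prev_angle, gyro_rate, accel_angle, a=A_COEF, dt=DT):
    """Blend integrated gyro rate (short-term) with accelerometer angle (long-term)."""
    return a * (prev_angle + gyro_rate * dt) + (1.0 - a) * accel_angle

class PID:
    def __init__(self, kp, ki, kd, dt=DT):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # Hypothetical gains and one filtered-control step with fake sensor data.
    pid = PID(kp=2.0, ki=0.5, kd=0.05)
    angle = complementary_filter(prev_angle=1.2, gyro_rate=-3.0, accel_angle=0.8)
    print("filtered angle:", round(angle, 3), "deg; control:", round(pid.update(0.0, angle), 3))
```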

  2. Humanoid Robots: A New Kind of Tool

    DTIC Science & Technology

    2000-01-01

    Breazeal (Ferrell), R. Irie, C. C. Kemp, M. J. Marjanovic, B. Scassellati, M. M. Williamson, Alternate Essences of Intelligence, AAAI 1998. 2 R. A. Brooks, C...Breazeal, M. J. Marjanovic, B. Scassellati, M. M. Williamson, The Cog Project: Building a Humanoid Robot, Computation for Metaphors, Analogy and...Functions, Vol. 608, 1990, New York Academy of Sciences, pp. 637-676. 7 M. J. Marjanovic, B. Scassellati, M. M. Williamson, Self-Taught Visually-Guided

  3. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    PubMed

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.

  4. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks

    PubMed Central

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
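
    A toy reservoir-computing sketch in the spirit of the echo state network described above is shown below; the reservoir size, the one-hot word coding, and the random training data are all placeholders, not the authors' model or corpus.

```python
# Toy echo state network sketch (not the authors' model or corpus): a fixed
# random reservoir is driven by one-hot word codes, and a ridge-regression
# readout is trained to emit a target "role" signal at the end of a sentence.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_OUT = 5, 100, 2   # hypothetical sizes

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(inputs, leak=0.3):
    """Collect reservoir states for a sequence of one-hot input vectors."""
    x, states = np.zeros(N_RES), []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Fake training data: random "sentences" paired with random role targets.
X, Y = [], []
for _ in range(50):
    seq = np.eye(N_IN)[rng.integers(0, N_IN, size=6)]
    X.append(run_reservoir(seq)[-1])          # final state summarizes the sentence
    Y.append(rng.integers(0, 2, size=N_OUT))  # toy predicate-argument target
X, Y = np.array(X), np.array(Y)

# Ridge-regression readout: W_out = Y^T X (X^T X + beta I)^-1
beta = 1e-2
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + beta * np.eye(N_RES))
print("readout shape:", W_out.shape)
```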

  5. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants’ gaze to the one with a disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  6. Robotic Assistance in Medication Management: Development and Evaluation of a Prototype.

    PubMed

    Schweitzer, Marco; Hoerbst, Alexander

    2016-01-01

    An increasing number of elderly people and the prevalence of multimorbid conditions often lead to age-related problems for patients in handling their common polypharmaceutical, domestic everyday medication. Ambient Assisted Living therefore provides means to support elderly people in their everyday lives. In the present paper we investigated the viability of using a commercial mass-produced humanoid robot system to support the domestic medication of an elderly person. A prototypical software application based on the NAO robot platform was implemented to remind the patient about drug intakes, check for drug-drug interactions, document compliance and assist through the complete process of individual medication. A technical and functional evaluation of the system in a laboratory setting revealed versatile and viable results, though further investigations are needed to examine the practical use in an applied field.

  7. Feasibility of using a humanoid robot to elicit communicational response in children with mild autism

    NASA Astrophysics Data System (ADS)

    Malik, Norjasween Abdul; Shamsuddin, Syamimi; Yussof, Hanafiah; Azfar Miskam, Mohd; Che Hamid, Aminullah

    2013-12-01

    Research evidence is accumulating with regard to the potential use of robots for the rehabilitation of children with autism. The purpose of this paper is to elaborate on the results of communicational response in two children with autism during interaction with the humanoid robot NAO. Both autistic subjects in this study have been diagnosed with mild autism. Following the outcome of our first pilot study, the aim of the current experiment is to explore the application of the NAO robot to engage with a child and further teach about emotions through a game-centered and song-based approach. The experiment procedure involved interaction between the humanoid robot NAO and each child through a series of four different modules. The observation items are based on ten items selected and referenced to GARS-2 (Gilliam Autism Rating Scale-second edition) and also input from clinicians and therapists. The results clearly indicated that both children responded positively throughout the interaction. Negative responses such as feeling scared or shying away from the robot were not detected. Real-time two-way communication between the child and the robot had a clearly positive impact on the responses towards the robot. To conclude, it is feasible to include robot-based interaction specifically to elicit communicational response as a part of the rehabilitation intervention of children with autism.

  8. Humanoids for Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Myers, John; Newton, Jason; Csaszar, Ambrus; Gan, Quan; Hidalgo, Tim; Moore, Jeff; Sandoval, Steven; Xu, Jiajing; Schon, Aaron; hide

    2006-01-01

    Human-like shape makes humanoids well suited for being fostered/taught by humans, and for learning from humans, which we consider the best means to develop cognitive and perceptual/motor skills for truly intelligent, cognitive robots.

  9. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    PubMed

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  10. Honda humanoid robots development.

    PubMed

    Hirose, Masato; Ogawa, Kenichi

    2007-01-15

    Honda has been doing research on robotics since 1986 with a focus upon bipedal walking technology. The research started with straight and static walking of the first prototype two-legged robot. Now, the continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot ASIMO. ASIMO is Honda's most advanced robot so far in terms of its mechanism and control system. ASIMO's configuration allows it to operate freely in the human living space. It could be of practical help to humans with its five-fingered arms as well as its walking function. The target of further development of ASIMO is to develop a robot to improve life in human society. Much development work will be continued both mechanically and electronically, staying true to Honda's 'challenging spirit'.

  11. Sociable Machines: Expressive Social Exchange between Humans and Robots

    DTIC Science & Technology

    2000-05-01

    many occasions about theories on emotion. I’ve cornered Robert Irie again and again about auditory processing. I’ve bugged Matto Marjanovic throughout...development at the MIT Artificial Intelligence Lab (Brooks, Breazeal, Marjanovic, Scassellati & Williamson 1999). Cog is a general purpose humanoid...RA-2, 253-262. Brooks, R. A., Breazeal, C., Marjanovic, M., Scassellati, B. & Williamson, M. M. (1999), The Cog Project: Building a Humanoid Robot, in

  12. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    PubMed

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants ( N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior - (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  13. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

    Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low-latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. PMID:24379753

  14. Adaptive Language Games with Robots

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    2010-11-01

    This paper surveys recent research into language evolution using computer simulations and robotic experiments. This field has made tremendous progress in the past decade going from simple simulations of lexicon formation with animallike cybernetic robots to sophisticated grammatical experiments with humanoid robots.

  15. Towards Autonomous Operation of Robonaut 2

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hart, Stephen W.; Yamokoski, J. D.

    2011-01-01

    The Robonaut 2 (R2) platform, as shown in Figure 1, was designed through a collaboration between NASA and General Motors to be a capable robotic assistant with dexterity similar to that of a suited astronaut [1]. An R2 robot was sent to the International Space Station (ISS) in February 2011 and, in doing so, became the first humanoid robot in space. Its capabilities are presently being tested and expanded to increase its usefulness to the crew. Current work on R2 includes the addition of a mobility platform to allow the robot to complete tasks (such as cleaning, maintenance, or simple construction activities) both inside and outside of the ISS. To support these new activities, R2's software architecture is being developed to provide efficient ways of programming robust and autonomous behavior. In particular, a multi-tiered software architecture is proposed that combines principles of low-level feedback control with higher-level planners that accomplish behavioral goals at the task level given the run-time context, user constraints, the health of the system, and so on. The proposed architecture is shown in Figure 2. At the lowest level, the resource level, there exist the various sensory and motor signals available to the system. The sensory signals for a robot such as R2 include multiple channels of force/torque data, joint or Cartesian positions calculated through the robot's proprioception, and signals derived from objects observable by its cameras.

  16. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    PubMed Central

    Roncone, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement. PMID:27711136

  17. Using a cognitive architecture for general purpose service robot control

    NASA Astrophysics Data System (ADS)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills including navigating, grasping, recognising objects or people, among others, is considered in this paper. By using those skills the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills to activate, or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner by selecting which action the robot should complete, addressing it towards the goal. Our proposal allows new goals to be included for the robot just by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.

  18. Dissociated emergent-response system and fine-processing system in human neural network and a heuristic neural architecture for autonomous humanoid robots.

    PubMed

    Yan, Xiaodan

    2010-01-01

    The current study investigated the functional connectivity of the primary sensory system with resting state fMRI and applied such knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. A dissociation was found within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. Such neural architecture inspired the design of a dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and another consolidation center to coordinate the two systems. Such a design can help autonomous robots to detect and respond quickly to danger, so as to maintain their sustainability and independence.

  19. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886

  20. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    PubMed

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.
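
    The sketch below is a strongly simplified, single-link illustration of an IC-style balance loop: a delayed PD controller acts on a weighted mix of body-in-space and body-to-support-surface angle errors during a support-surface tilt. All plant and controller parameters are assumptions, not the identified PostuRob II values.

```python
# Time-domain sketch of an IC-style balance loop (a simplification, not the
# PostuRob II implementation): a single inverted pendulum is stabilized by a
# delayed PD controller acting on a weighted mix of body-in-space and
# body-to-support-surface angle errors.
import numpy as np

# Hypothetical plant and controller parameters.
J, m, g, h = 80.0, 70.0, 9.81, 0.9        # inertia, mass, gravity, CoM height
Kp, Kd, delay, dt = 900.0, 300.0, 0.1, 0.001
w_prop = 0.6                              # proprioceptive channel weight

def simulate(t_end=10.0, tilt_amplitude=0.02):
    steps = int(t_end / dt)
    theta, omega = 0.0, 0.0               # body-in-space angle and rate
    buf = [0.0] * int(round(delay / dt))  # delay line for the torque command
    log = []
    for k in range(steps):
        surface = tilt_amplitude * np.sin(2 * np.pi * 0.2 * k * dt)
        # Weighted sensory error: proprioception senses body-to-surface angle,
        # the remaining weight represents space-referenced sensing.
        err = w_prop * (theta - surface) + (1.0 - w_prop) * theta
        buf.append(-(Kp * err + Kd * omega))
        torque = buf.pop(0)               # apply the delayed command
        alpha = (m * g * h * np.sin(theta) + torque) / J
        omega += alpha * dt
        theta += omega * dt
        log.append(theta)
    return np.array(log)

if __name__ == "__main__":
    sway = simulate()
    print("peak sway (rad):", round(float(np.max(np.abs(sway))), 4))
```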

  1. The Affordance Template ROS Package for Robot Task Programming

    NASA Technical Reports Server (NTRS)

    Hart, Stephen; Dinh, Paul; Hambuchen, Kimberly

    2015-01-01

    This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic XML description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.

  2. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots

    PubMed Central

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people’s social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people’s judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior – (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people’s intentional stance toward the robot was in this case very similar to their stance toward the human. PMID:29184519

  3. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    PubMed

    Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

    Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even longer sequences such as daily episodes. Moreover, it can retrieve the correct episode from partial input cues robustly. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, for performing the three tasks of arranging toys, making cereal, and disposing of garbage.

  4. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
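
    The exposure-adaptation idea can be illustrated with a simple feedback rule: if too few features survive in the current image, lengthen the exposure time, otherwise relax it. The sketch below is illustrative only; thresholds, gains, and limits are assumptions rather than the patented method.

```python
# Illustrative auto-exposure loop in the spirit of the adaptation described
# above (not the patented algorithm): the exposure time is scaled so that the
# number of trackable features stays above a threshold, within camera limits.
def adapt_exposure(exposure_ms, n_features, min_features=80,
                   gain=1.25, min_ms=0.5, max_ms=30.0):
    """Increase exposure when too few features survive; slowly relax otherwise."""
    if n_features < min_features:
        exposure_ms *= gain          # brighten to recover lost feature data
    else:
        exposure_ms /= 1.05          # drift back down to limit motion blur
    return max(min_ms, min(max_ms, exposure_ms))

if __name__ == "__main__":
    exp = 5.0
    for detected in [40, 55, 70, 95, 120]:   # fake feature counts per frame
        exp = adapt_exposure(exp, detected)
        print(f"features={detected:3d} -> exposure={exp:.2f} ms")
```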

  5. A cortically-inspired model for inverse kinematics computation of a humanoid finger with mechanically coupled joints.

    PubMed

    Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A

    2016-05-19

    The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.
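
    A purely numerical stand-in for the inverse-kinematics problem above is sketched here: a planar three-link finger whose distal joint is mechanically coupled to the middle joint (coupling ratio, link lengths, and solver settings are assumptions), solved by damped least squares rather than the cortical model.

```python
# Numerical sketch (not the cortical model in the paper): inverse kinematics of
# a planar three-link finger whose last two joints are mechanically coupled as
# q3 = c * q2 (coupling ratio c is an assumption).  The reduced 2-DoF problem
# is solved with damped least squares on a finite-difference Jacobian.
import numpy as np

L = np.array([0.045, 0.025, 0.018])   # hypothetical phalanx lengths (m)
C = 2.0 / 3.0                         # assumed mechanical coupling ratio

def fingertip(q):
    """Forward kinematics of the fingertip for reduced joint vector q = (q1, q2)."""
    q1, q2 = q
    angles = np.cumsum([q1, q2, C * q2])
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def ik(target, q=np.array([0.3, 0.3]), damping=0.01, iters=200):
    for _ in range(iters):
        err = target - fingertip(q)
        if np.linalg.norm(err) < 1e-5:
            break
        # Finite-difference Jacobian of the 2D fingertip position w.r.t. (q1, q2).
        J = np.column_stack([
            (fingertip(q + d) - fingertip(q)) / 1e-6
            for d in (np.array([1e-6, 0.0]), np.array([0.0, 1e-6]))
        ])
        q = q + np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
    return q

if __name__ == "__main__":
    goal = np.array([0.05, 0.04])
    q_sol = ik(goal)
    print("joint solution (rad):", np.round(q_sol, 3), "reached:", np.round(fingertip(q_sol), 4))
```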

  6. Robust sensorimotor representation to physical interaction changes in humanoid motion learning.

    PubMed

    Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio

    2015-05-01

    This paper proposes a learning from demonstration system based on a motion feature, called phase transfer sequence. The system aims to synthesize the knowledge on humanoid whole body motions learned during teacher-supported interactions, and apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned by the proposed feature than by conventional similarity measurements. Phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes of the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.

  7. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
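
    The flavor of such a velocity-level QP can be conveyed with a generic redundancy-resolution sketch: minimize a drift-penalizing index subject to the arm's differential kinematics and joint-velocity bounds. This is not the authors' TVC-DACMG formulation or their recurrent-network solver; the Jacobian, gains, and limits below are hypothetical, and an off-the-shelf solver is used instead.

```python
# Minimal sketch of a QP-type velocity-level scheme for drift-free (cyclic)
# motion: minimize a cyclic-motion index that pulls joints back to their
# initial values, subject to differential kinematics and velocity limits.
import numpy as np
from scipy.optimize import minimize

def solve_step(J, xdot_des, q, q_init, k=2.0, dq_max=1.0):
    """One control step: returns joint velocities dq for a single arm."""
    n = J.shape[1]

    def cost(dq):                                  # cyclic-motion index
        drift = dq + k * (q - q_init)
        return 0.5 * float(drift @ drift)

    cons = ({"type": "eq", "fun": lambda dq: J @ dq - xdot_des},)
    bounds = [(-dq_max, dq_max)] * n               # time-varying limits in general
    res = minimize(cost, np.zeros(n), bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

if __name__ == "__main__":
    # Hypothetical 3-DoF planar arm Jacobian at the current configuration.
    J = np.array([[-0.5, -0.3, -0.1],
                  [ 0.8,  0.4,  0.2]])
    q = np.array([0.4, 0.6, -0.2])
    q_init = np.array([0.3, 0.5, 0.0])
    dq = solve_step(J, xdot_des=np.array([0.05, 0.0]), q=q, q_init=q_init)
    print("joint velocities:", np.round(dq, 4))
```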

  8. Recent trends in humanoid robotics research: scientific background, applications, and implications.

    PubMed

    Solis, Jorge; Takanishi, Atsuo

    2010-11-01

    Even though the market size is still small at this moment, the application fields of robots are gradually spreading from the manufacturing industry to other sectors, as robots become one of the important components supporting an aging society. For this purpose, research on human-robot interaction (HRI) has been an emerging topic of interest for both basic research and customer application. The studies are especially focused on behavioral and cognitive aspects of the interaction and the social contexts surrounding it. As a part of these studies, the term "roboethics" has been introduced as an approach to discuss the potentialities and the limits of robots in relation to human beings. In this article, we describe recent research trends in the field of humanoid robotics. Their principal applications and their possible impact are discussed.

  9. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  10. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    PubMed Central

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  11. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for criteria, fuzzy integral is applied to each candidate gaze direction for global evaluation. For the effective dynamic obstacle avoidance, partial evaluation functions about self-localization error and surrounding obstacles are also used for generating virtual dynamic obstacle for the modified-univector field method which generates the path and velocity of robot toward the next waypoint. The proposed architecture is verified through the comparison with the conventional weighted sum-based approach with the simulations using a developed simulator for HanSaRam-IX (HSR-IX).
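
    A small sketch of fuzzy-integral-based evaluation of gaze candidates is given below, using a Choquet integral over the four criteria named in the abstract with a Sugeno lambda-measure built from hypothetical densities; the partial scores per gaze direction are likewise invented.

```python
# Sketch of evaluating candidate gaze directions with a Choquet fuzzy integral
# over the four criteria named in the abstract.  The densities (degrees of
# consideration) and partial scores below are hypothetical, and a Sugeno
# lambda-measure is used to build the fuzzy measure from the densities.
import numpy as np
from scipy.optimize import brentq

CRITERIA = ["map_confidence", "waypoint", "self_localization", "obstacles"]
DENSITIES = np.array([0.30, 0.25, 0.20, 0.15])   # hypothetical weights, sum < 1

def sugeno_lambda(g):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero root lam > -1."""
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    return brentq(f, 1e-9, 1e6) if g.sum() < 1.0 else brentq(f, -1.0 + 1e-9, -1e-9)

def choquet(scores, g, lam):
    """Choquet integral of partial scores w.r.t. the lambda-fuzzy measure."""
    order = np.argsort(scores)[::-1]              # sort criteria by score, descending
    h = np.append(scores[order], 0.0)
    total, g_acc = 0.0, 0.0
    for i, idx in enumerate(order):
        g_acc = g_acc + g[idx] + lam * g_acc * g[idx]   # measure of the top-(i+1) set
        total += (h[i] - h[i + 1]) * g_acc
    return total

if __name__ == "__main__":
    lam = sugeno_lambda(DENSITIES)
    candidates = {                                # partial scores per gaze direction
        "left": np.array([0.6, 0.2, 0.8, 0.4]),
        "center": np.array([0.5, 0.9, 0.5, 0.7]),
        "right": np.array([0.3, 0.4, 0.6, 0.9]),
    }
    best = max(candidates, key=lambda k: choquet(candidates[k], DENSITIES, lam))
    print("selected gaze direction:", best)
```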

  12. Reaching and Grasping a Glass of Water by Locked-In ALS Patients through a BCI-Controlled Humanoid Robot

    PubMed Central

    Spataro, Rossella; Chella, Antonio; Allison, Brendan; Giardina, Marcello; Sorbello, Rosario; Tramonte, Salvatore; Guger, Christoph; La Bella, Vincenzo

    2017-01-01

    Locked-in Amyotrophic Lateral Sclerosis (ALS) patients are fully dependent on caregivers for any daily need. At this stage, basic communication and environmental control may not be possible even with commonly used augmentative and alternative communication devices. Brain Computer Interface (BCI) technology allows users to modulate brain activity for communication and control of machines and devices, without requiring a motor control. In the last several years, numerous articles have described how persons with ALS could effectively use BCIs for different goals, usually spelling. In the present study, locked-in ALS patients used a BCI system to directly control the humanoid robot NAO (Aldebaran Robotics, France) with the aim of reaching and grasping a glass of water. Four ALS patients and four healthy controls were recruited and trained to operate this humanoid robot through a P300-based BCI. A few minutes training was sufficient to efficiently operate the system in different environments. Three out of the four ALS patients and all controls successfully performed the task with a high level of accuracy. These results suggest that BCI-operated robots can be used by locked-in ALS patients as an artificial alter-ego, the machine being able to move, speak and act in his/her place. PMID:28298888

  13. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints. PMID:27579033

  14. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min due to strict recognition constraints.
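
    The time-shift correlation feature itself is easy to illustrate: correlate each post-stimulus epoch with a P300 template at several latency shifts and keep the correlation series as the feature vector. The sketch below uses fabricated epochs, a Gaussian template, and an assumed sampling rate, and stops short of the ANN classifier used in the paper.

```python
# Sketch of the time-shift correlation feature (illustrative, not the authors'
# code): each post-stimulus EEG epoch is correlated with a P300 template at a
# range of latency shifts, and the resulting correlation series would serve as
# input features for a small classifier (an ANN in the paper).
import numpy as np

FS = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.8, 1.0 / FS)            # 800 ms epoch

def p300_template(latency=0.3, width=0.06):
    """Gaussian bump as a crude P300 template centered at `latency` seconds."""
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

def time_shift_correlations(epoch, template, shifts_ms=range(-60, 61, 10)):
    """Pearson correlation between epoch and template for each latency shift."""
    feats = []
    for ms in shifts_ms:
        shifted = np.roll(template, int(ms * FS / 1000))
        feats.append(np.corrcoef(epoch, shifted)[0, 1])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Fake epochs for four LED stimuli; stimulus 2 contains a (jittered) P300.
    epochs = [rng.normal(0, 0.3, t.size) for _ in range(4)]
    epochs[2] += p300_template(latency=0.33)
    features = np.array([time_shift_correlations(e, p300_template()) for e in epochs])
    print("selected stimulus:", int(np.argmax(features.max(axis=1))))
```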

  15. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    PubMed

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics' - show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  16. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.
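
    A generic Cartesian impedance law of the kind referred to above can be written in a few lines: a spring-damper wrench on the end-effector error, mapped to joint torques through the Jacobian transpose. The sketch below is illustrative and not the patented framework; stiffness, damping, and the Jacobian are made-up values.

```python
# Minimal sketch of an impedance-style control law (a generic Cartesian
# impedance controller, not the patented framework): the commanded end-effector
# wrench follows a spring-damper model and is mapped to joint torques through
# the Jacobian transpose.
import numpy as np

def impedance_torques(J, x, xd, x_des, xd_des, K, D):
    """tau = J^T (K (x_des - x) + D (xd_des - xd)) for one control cycle."""
    wrench = K @ (x_des - x) + D @ (xd_des - xd)
    return J.T @ wrench

if __name__ == "__main__":
    # Hypothetical 2D task, 3-DoF arm: stiffness/damping chosen for illustration.
    K = np.diag([400.0, 400.0])            # N/m
    D = np.diag([40.0, 40.0])              # N*s/m
    J = np.array([[-0.4, -0.25, -0.1],
                  [ 0.7,  0.35,  0.15]])
    tau = impedance_torques(
        J,
        x=np.array([0.30, 0.10]), xd=np.array([0.0, 0.0]),
        x_des=np.array([0.32, 0.12]), xd_des=np.array([0.0, 0.0]),
        K=K, D=D,
    )
    print("joint torques (Nm):", np.round(tau, 3))
```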

  17. Human brain spots emotion in non humanoid robots

    PubMed Central

    Foucher, Aurélie; Jouvent, Roland; Nadel, Jacqueline

    2011-01-01

    The computation by which our brain elaborates fast responses to emotional expressions is currently an active field of brain studies. Previous studies have focused on stimuli taken from everyday life. Here, we investigated event-related potentials in response to happy vs neutral stimuli of human and non-humanoid robots. At the behavioural level, emotion shortened reaction times similarly for robotic and human stimuli. Early P1 wave was enhanced in response to happy compared to neutral expressions for robotic as well as for human stimuli, suggesting that emotion from robots is encoded as early as human emotion expression. Congruent with their lower faceness properties compared to human stimuli, robots elicited a later and lower N170 component than human stimuli. These findings challenge the claim that robots need to present an anthropomorphic aspect to interact with humans. Taken together, such results suggest that the early brain processing of emotional expressions is not bounded to human-like arrangements embodying emotion. PMID:20194513

  18. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

    We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram (EMG), which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities, or even complete quadriplegia, an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between the GKP and the EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
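
    One plausible way to obtain discriminative features from two class-specific covariance matrices is a CSP-style generalized eigendecomposition; the Python sketch below illustrates that construction on synthetic data and should not be read as the paper's exact method. Channel counts and trial dimensions are assumptions.

      import numpy as np
      from scipy.linalg import eigh

      def discriminative_filters(X_tongue, X_eye, n_filters=2):
          # Spatial filters that emphasise tongue-related (GKP) activity while
          # suppressing eye-related (EOG) activity, built from the two
          # class-specific mean covariance matrices.
          def mean_cov(X):                      # X: (trials, channels, samples)
              return np.mean([np.cov(trial) for trial in X], axis=0)

          C_t, C_e = mean_cov(X_tongue), mean_cov(X_eye)
          # Generalised eigenproblem  C_t w = lambda (C_t + C_e) w
          vals, vecs = eigh(C_t, C_t + C_e)
          order = np.argsort(vals)[::-1]
          return vecs[:, order[:n_filters]]     # columns = spatial filters

      # Synthetic stand-in: 20 trials, 6 facial electrodes, 500 samples each.
      rng = np.random.default_rng(1)
      W = discriminative_filters(rng.normal(size=(20, 6, 500)),
                                 rng.normal(size=(20, 6, 500)))
      print(W.shape)   # (6, 2)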

  19. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    There have been many recent developments in building intelligent humanoid robots, mainly to handle voice and images. In this research, we propose a blind speech separation system that uses FastICA for audio filtering and separation and that can be used in education or entertainment settings. The main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on the Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% for command and query recognition.
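
    A minimal FastICA separation sketch, using scikit-learn's FastICA on synthetic mixtures rather than the authors' Bioloid/Raspberry Pi pipeline, looks like this; the sources, mixing matrix, and component count are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 8000)

      # Two synthetic "speech" sources plus a noise source, mixed onto 3 microphones.
      s1 = np.sin(2 * np.pi * 180 * t)
      s2 = np.sign(np.sin(2 * np.pi * 240 * t))
      noise = 0.2 * rng.standard_normal(len(t))
      S = np.c_[s1, s2, noise]
      A = np.array([[1.0, 0.6, 0.3],
                    [0.5, 1.0, 0.4],
                    [0.3, 0.7, 1.0]])            # unknown mixing matrix
      X = S @ A.T                                 # microphone observations

      ica = FastICA(n_components=3, random_state=0)
      S_est = ica.fit_transform(X)                # separated source estimates
      print(S_est.shape)                          # (8000, 3)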

  20. Embedded diagnostic, prognostic, and health management system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more of the actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.
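
    The following toy Python class sketches the role such an embedded DPHM module plays (measure, record, and expose data to a remote host or external device); the field names and structure are illustrative assumptions, not the patent's design.

      from dataclasses import dataclass, field
      import time

      @dataclass
      class DPHMModule:
          # Stand-in for a diagnostic/prognostic/health-management module
          # embedded at one control level (joint, limb, or system).
          control_level: str
          records: list = field(default_factory=list)

          def log(self, name, value):
              self.records.append({"t": time.time(),
                                   "level": self.control_level,
                                   "signal": name,
                                   "value": value})

          def export(self):
              # What a remote host would fetch over one of the networks.
              return list(self.records)

      joint_dphm = DPHMModule("joint:left_elbow")
      joint_dphm.log("motor_temperature_C", 41.7)
      joint_dphm.log("torque_Nm", 3.2)
      print(joint_dphm.export())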

  1. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    PubMed Central

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model of the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire the MNS through sensorimotor associative learning during self-exploration and that this system is capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by its camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body image. In the learning process, the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot. PMID:26998923

  2. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model of the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire the MNS through sensorimotor associative learning during self-exploration and that this system is capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by its camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body image. In the learning process, the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot.

  3. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    DTIC Science & Technology

    2009-03-01


  4. A direct methanol fuel cell system to power a humanoid robot

    NASA Astrophysics Data System (ADS)

    Joh, Han-Ik; Ha, Tae Jung; Hwang, Sang Youp; Kim, Jong-Ho; Chae, Seung-Hoon; Cho, Jae Hyung; Prabhuram, Joghee; Kim, Soo-Kil; Lim, Tae-Hoon; Cho, Baek-Kyu; Oh, Jun-Ho; Moon, Sang Heup; Ha, Heung Yong

    In this study, a direct methanol fuel cell (DMFC) system, which is the first of its kind, has been developed to power a humanoid robot. The DMFC system consists of a stack, a balance of plant (BOP), a power management unit (PMU), and a back-up battery. The stack has 42 unit cells and is able to produce about 400 W at 19.3 V. The robot is 125 cm tall, weighs 56 kg, and consumes 210 W during normal operation. The robot is integrated with the DMFC system that powers the robot in a stable manner for more than 2 h. The power consumption by the robot during various motions is studied, and load sharing between the fuel cell and the back-up battery is also observed. The loss of methanol feed due to crossover and evaporation amounts to 32.0% and the efficiency of the DMFC system in terms of net electric power is 22.0%.
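
    Working back from the figures quoted in the abstract, a quick Python calculation gives the approximate stack current, per-cell voltage, and the energy delivered to the robot over the reported 2 h run:

      stack_power_W = 400.0
      stack_voltage_V = 19.3
      n_cells = 42
      robot_power_W = 210.0
      runtime_h = 2.0

      stack_current_A = stack_power_W / stack_voltage_V    # ~20.7 A
      cell_voltage_V = stack_voltage_V / n_cells           # ~0.46 V per cell
      energy_delivered_Wh = robot_power_W * runtime_h      # ~420 Wh to the robot

      print(f"{stack_current_A:.1f} A, {cell_voltage_V:.2f} V/cell, "
            f"{energy_delivered_Wh:.0f} Wh over {runtime_h:.0f} h")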

  5. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.

  6. Grounding Action Words in the Sensorimotor Interaction with the World: Experiments with a Simulated iCub Humanoid Robot

    PubMed Central

    Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony

    2010-01-01

    This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503

  7. A Control Framework for Anthropomorphic Biped Walking Based on Stabilizing Feedforward Trajectories.

    PubMed

    Rezazadeh, Siavash; Gregg, Robert D

    2016-10-01

    Although dynamic walking methods have had notable successes in the control of bipedal robots in recent years, most humanoid robots still rely on quasi-static Zero Moment Point controllers. This work is an attempt to design a highly stable controller for dynamic walking of a human-like model that can be used both for control of humanoid robots and for prosthetic legs. The method is based on time-based trajectories that can induce a highly stable limit cycle in the bipedal robot. The time-based nature of the controller motivates its use to entrain a model of an amputee walking, which can potentially lead to better coordination of the interaction between the prosthesis and the human. Simulations demonstrate the stability of the controller and its robustness against external perturbations.

  8. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  9. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    In this paper we tackle the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies onto a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art capabilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134

  10. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – Students gather to watch as a DARwin-OP miniature humanoid robot from Virginia Tech Robotics demonstrates its soccer abilities at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  11. Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Lee, Jih-Hsien; Chao, Po-Yao; Wang, Chin-Yeh; Chen, Gwo-Dong

    2010-01-01

    As robot technologies develop, many researchers have tried to use robots to support education. Studies have shown that robots can help students develop problem-solving abilities and learn computer programming, mathematics, and science. However, few studies discuss the use of robots to facilitate the teaching of second languages. We discuss whether…

  12. Human-Robot Interaction: Status and Challenges.

    PubMed

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in the areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at an initial stage. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  13. Centaur: A Mobile Dexterous Humanoid for Surface Operations

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Ambrose, Robert O.; Goza, S. Michael; Junkin, Lucien; Neuhaus, Peter D.; Pratt, Jerry E.

    2005-01-01

    Future human and robotic planetary expeditions could benefit greatly from expanded Extra-Vehicular Activity (EVA) capabilities supporting a broad range of multiple, concurrent surface operations. Risky, expensive and complex, conventional EVAs are restricted in both duration and scope by consumables and available manpower, creating a resource management problem. A mobile, highly dexterous Extra-Vehicular Robotic (EVR) system called Centaur is proposed to cost-effectively augment human astronauts on surface excursions. The Centaur design combines a highly capable wheeled mobility platform with an anthropomorphic upper body mounted on a three degree-of-freedom waist. Able to use many ordinary handheld tools, the robot could conserve EVA hours by relieving humans of many routine inspection and maintenance chores and assisting them in more complex tasks, such as repairing other robots. As an astronaut surrogate, Centaur could take risks unacceptable to humans, respond more quickly to EVA emergencies and work much longer shifts. Though originally conceived as a system for planetary surface exploration, the Centaur concept could easily be adapted for terrestrial military applications such as de-Gig, surveillance and other hazardous duties.

  14. Artificial humanoid for the elderly people.

    PubMed

    Simou, Panagiota; Alexiou, Athanasios; Tiligadis, Konstantinos

    2015-01-01

    While frailty and other multi-scale factors have to be correlated during a geriatric assessment, a few prototype robots have already been developed to measure and provide real-time information concerning elderly daily activities. Cognitive impairment and alterations in daily functioning should be recognized immediately by caregivers so that they can be prevented and, where possible, treated. In this chapter we recognize the necessity of artificial robots in the personal care of the elderly population, not only as a mobile laboratory-geriatrician, but mainly as a socialized digital humanoid able to develop social behavior and activate memories and emotions.

  15. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    PubMed Central

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  16. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement.

    PubMed

    Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.

  17. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    PubMed Central

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554

  18. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – A miniature humanoid robot known as DARwin-OP, from Virginia Tech Robotics, plays soccer with a red tennis ball for a crowd of students at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  19. Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison

    2013-01-01

    Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements: first, that the robot can allow humans to enter its workspace during operation and, second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers called Multi-Loop control which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.

  20. Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it.

    PubMed

    Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A

    2014-05-01

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

  1. Artificial heart for humanoid robot using coiled SMA actuators

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Tadesse, Yonas

    2015-03-01

    Previously, we have presented the design and characterization of artificial heart using cylindrical shape memory alloy (SMA) actuators for humanoids [1]. The robotic heart was primarily designed to pump a blood-like fluid to parts of the robot such as the face to simulate blushing or anger by the use of elastomeric substrates for the transport of fluids. It can also be used for other applications. In this paper, we present an improved design by using high strain coiled SMAs and a novel pumping mechanism that uses sequential actuation to create peristalsis-like motions, and hence pump the fluid. Various placements of actuators will be investigated with respect to the silicone elastomeric body. This new approach provides a better performance in terms of the fluid volume pumped.
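
    As a rough sketch of the sequential-actuation idea (not the authors' hardware timing), the Python snippet below generates a phase-lagged on/off schedule so that contraction of the coiled SMA segments travels along the channel like a peristaltic wave; the segment count, period, and duty cycle are assumptions.

      import numpy as np

      def peristaltic_schedule(n_segments=4, period_s=2.0, duty=0.35, dt=0.05,
                               duration_s=4.0):
          # On/off command pattern for sequentially heated SMA segments: each
          # segment is activated with a phase lag so the contraction travels
          # along the channel like a peristaltic wave.
          t = np.arange(0, duration_s, dt)
          commands = np.zeros((len(t), n_segments), dtype=int)
          for k in range(n_segments):
              phase = (t / period_s - k / n_segments) % 1.0
              commands[:, k] = (phase < duty).astype(int)
          return t, commands

      t, cmd = peristaltic_schedule()
      print(cmd[:10])      # each row = which segments are powered at that instant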

  2. Actuator and electronics packaging for extrinsic humanoid hand

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Diftler, Myron A. (Inventor); Reich, David M. (Inventor); Askew, Scott R. (Inventor)

    2013-01-01

    The lower arm assembly for a humanoid robot includes an arm support having a first side and a second side, a plurality of wrist actuators mounted to the first side of the arm support, a plurality of finger actuators mounted to the second side of the arm support and a plurality of electronics also located on the first side of the arm support.

  3. Rotary Series Elastic Actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2013-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.
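
    The core measurement principle of a series elastic actuator can be summarized in a couple of lines of Python: with a torsion spring of known stiffness between the gear output and the joint, the transmitted torque follows from the measured spring deflection. This is the textbook SEA relation rather than text from the patent, and the numbers are illustrative.

      def sea_joint_torque(theta_gear_out, theta_joint, k_spring):
          # Series-elastic torque sensing: torque is proportional to the
          # deflection of the torsion spring between gear output and joint.
          return k_spring * (theta_gear_out - theta_joint)

      # Example: a 300 Nm/rad spring wound up by 0.02 rad -> about 6 Nm at the joint.
      print(sea_joint_torque(theta_gear_out=0.52, theta_joint=0.50, k_spring=300.0))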

  4. Rotary series elastic actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2012-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.

  5. Predictive Interfaces for Long-Distance Tele-Operations

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas

    2005-01-01

    We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. Firstly, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Secondly, we develop compensation for the time-delay that exists when sending telemetry data from a remote operation point to robots located in low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, which is an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve sub-goal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time-delay is introduced into the system, the rate of tele-operation slows down to a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time-delays. To this end, we are developing an interface which will allow us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
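
    A minimal sketch of the intent-prediction idea, assuming the third-party hmmlearn package and synthetic telemetry: fit one hidden Markov model per sub-task from demonstrations, then classify a partially observed command/sensor sequence by which model assigns it the higher likelihood. The task names, features, and model sizes are all assumptions, not Robonaut data.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM     # assumed third-party package

      rng = np.random.default_rng(0)

      def make_demos(mean, n_demos=10, length=60):
          # Synthetic operator telemetry: 2-D features per time step.
          return [rng.normal(loc=mean, scale=0.25, size=(length, 2))
                  for _ in range(n_demos)]

      def fit_task_model(demos, n_states=3):
          X = np.concatenate(demos)
          lengths = [len(d) for d in demos]
          return GaussianHMM(n_components=n_states, random_state=0).fit(X, lengths)

      models = {"grasp_handrail": fit_task_model(make_demos([1.0, 0.0])),
                "fetch_tool":     fit_task_model(make_demos([0.0, 1.0]))}

      # Predict the operator's intent from a short, partially observed sequence;
      # a sub-goal controller could then finish the motion autonomously.
      partial = rng.normal(loc=[1.0, 0.0], scale=0.25, size=(15, 2))
      intent = max(models, key=lambda k: models[k].score(partial))
      print("predicted intent:", intent)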

  6. Role Transfer for Robot Tasking

    DTIC Science & Technology

    2002-04-01


  7. Design and development of an insect-inspired humanoid gripper that is structurally sound, yet very flexible

    NASA Astrophysics Data System (ADS)

    Hajjaj, S.; Pun, N.

    2013-06-01

    One of the biggest challenges in mechanical robotics design is the balance between structural integrity and flexibility. An industrial robotic gripper may be technically advanced yet contain only one degree of freedom (DOF), and adding more DOFs quickly makes the design complex. The human wrist and fingers, on the other hand, contain 23 DOFs and are lightweight and highly flexible. Robots are becoming an ever larger part of our social life and are increasingly incorporated into social, medical, and personal applications. For such robots to be effective, they need to approach human capability both in performance and in mechanical design. In this work, a humanoid gripper is designed and built to mimic a simplified version of the human wrist and fingers, drawing on both insect and human gripper designs. The main challenge was to ensure that the gripper is structurally sound while remaining flexible and lightweight. A combination of lightweight materials and a unique finger-actuator design was applied. The gripper is controlled by a PARALLAX servo controller 28823 (PSCI), which is mounted on the assembly itself. The result is a 6-DOF humanoid gripper made of lightweight material, similar in size to the human arm, and able to carry a weight of 1 kg.

  8. HET2 Overview

    NASA Technical Reports Server (NTRS)

    Fong, Terrence W.; Bualat, Maria Gabriele; Diftler, Myron A.

    2015-01-01

    Mid-year 2015 review charts for the Human Exploration Telerobotics 2 project, describing the Astrobee free-flying robot and the Robonaut 2 humanoid robot. Astrobee is a planned replacement for the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES), which are currently in use on board the International Space Station (ISS).

  9. Challenges in Building Robots that Imitate People

    DTIC Science & Technology

    2000-01-01


  10. Development of a neuromorphic control system for a lightweight humanoid robot

    NASA Astrophysics Data System (ADS)

    Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo

    2017-03-01

    A neuromorphic control system for a lightweight, middle-size humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules, each capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons that are randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neurons' outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization of the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, show that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and variance of 0.74.
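
    The control modules described here follow the reservoir-computing recipe: a fixed random recurrent core plus trained linear read-out weights. The Python sketch below illustrates that recipe on a synthetic periodic target; the reservoir size, sparsity, spectral radius, leak rate, and ridge factor are assumptions and not the paper's values.

      import numpy as np

      rng = np.random.default_rng(0)

      def make_reservoir(n=300, density=0.05, spectral_radius=1.05):
          # Random, sparse, fixed recurrent weights; a spectral radius near or
          # above 1 gives the rich (near-chaotic) dynamics described in the paper.
          W = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
          W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
          return W

      def run_reservoir(W, W_in, u, leak=0.3):
          x = np.zeros(W.shape[0])
          states = []
          for u_t in u:
              x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
              states.append(x.copy())
          return np.array(states)

      # Target: a periodic joint trajectory; input: a simple periodic drive.
      t = np.linspace(0, 8 * np.pi, 2000)
      u = np.sin(t)
      target = 0.5 * np.sin(t + 0.7) + 0.2 * np.sin(2 * t)

      W = make_reservoir()
      W_in = rng.normal(scale=0.5, size=W.shape[0])
      X = run_reservoir(W, W_in, u)

      # Only the read-out weights are trained (ridge regression).
      ridge = 1e-6
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)
      pred = X @ W_out
      print("RMS error:", np.sqrt(np.mean((pred - target) ** 2)))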

  11. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    ERIC Educational Resources Information Center

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  12. From self-observation to imitation: visuomotor association on a robotic hand.

    PubMed

    Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo

    2008-04-15

    Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that the imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16 degrees of freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be, at the neuronal level, founded on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots as well as to shed new light on the cognitive processes at play in early human cognitive development.

  13. Investigating the ability to read others' intentions using humanoid robots.

    PubMed

    Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio

    2015-01-01

    The ability to interact with other people hinges crucially on the possibility to anticipate how their actions would unfold. Recent evidence suggests that a similar skill may be grounded on the fact that we perform an action differently if different intentions lead it. Human observers can detect these differences and use them to predict the purpose leading the action. Although intention reading from movement observation is receiving a growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to study intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods, by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots indeed can establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken in consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.

  14. Tendon Driven Finger Actuation System

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Reich, David M. (Inventor); Bridgwater, Lyndon (Inventor); Linn, Douglas Martin (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor); Hargrave, Brian (Inventor); Valvo, Michael C. (Inventor); Abdallah, Muhammad E. (Inventor); hide

    2013-01-01

    A humanoid robot includes a robotic hand having at least one finger. An actuation system for the robotic finger includes an actuator assembly which is supported by the robot and is spaced apart from the finger. A tendon extends from the actuator assembly to the at least one finger and ends in a tendon terminator. The actuator assembly is operable to actuate the tendon to move the tendon terminator and, thus, the finger.

  15. Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

    PubMed

    Li, Mengfan; Li, Wei; Zhou, Huihui

    2016-02-01

    Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the visual stimuli meanings and help them more effectively concentrate on their mental activities.
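
    As an illustration of how such an N200 feature is typically quantified (simple epoch averaging followed by windowed peak picking), the Python sketch below runs on synthetic data; the sampling rate, window, and amplitudes are assumptions that only loosely match the figures above.

      import numpy as np

      fs = 250                                    # assumed sampling rate (Hz)
      rng = np.random.default_rng(0)

      # Synthetic single-trial epochs (trials x samples), 0-600 ms after the flash,
      # with a negative deflection near 260 ms standing in for the N200.
      t = np.arange(0, 0.6, 1 / fs)
      epochs = rng.normal(scale=4.0, size=(80, t.size))         # background EEG, uV
      epochs -= 6.7 * np.exp(-((t - 0.26) / 0.03) ** 2)         # robot-image response

      erp = epochs.mean(axis=0)                   # average over trials

      # N200 amplitude/latency: most negative point in a 200-300 ms window.
      win = (t >= 0.20) & (t <= 0.30)
      idx = np.argmin(erp[win])
      print("N200 peak: %.1f uV at %.0f ms"
            % (erp[win][idx], 1000 * t[win][idx]))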

  16. A pilot study for robot appearance preferences among high-functioning individuals with autism spectrum disorder: Implications for therapeutic use

    PubMed Central

    Warren, Zachary; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae; Wakita, Yujin; Ishiguro, Hiroshi; Mimura, Masaru; Minabe, Yoshio; Kikuchi, Mitsuru

    2017-01-01

    Recent rapid technological advances have enabled robots to fulfill a variety of human-like functions, leading researchers to propose the use of such technology for the development and subsequent validation of interventions for individuals with autism spectrum disorder (ASD). Although a variety of robots have been proposed as possible therapeutic tools, the physical appearances of humanoid robots currently used in therapy with these patients are highly varied. Very little is known about how these varied designs are experienced by individuals with ASD. In this study, we systematically evaluated preferences regarding robot appearance in a group of 16 individuals with ASD (ages 10–17). Our data suggest that there may be important differences in preference for different types of robots that vary according to interaction type for individuals with ASD. Specifically, within our pilot sample, children with higher-levels of reported ASD symptomatology reported a preference for specific humanoid robots to those perceived as more mechanical or mascot-like. The findings of this pilot study suggest that preferences and reactions to robotic interactions may vary tremendously across individuals with ASD. Future work should evaluate how such differences may be systematically measured and potentially harnessed to facilitate meaningful interactive and intervention paradigms. PMID:29028837

  17. An artificial nociceptor based on a diffusive memristor.

    PubMed

    Yoon, Jung Ho; Wang, Zhongrui; Kim, Kyung Min; Wu, Huaqiang; Ravichandran, Vignesh; Xia, Qiangfei; Hwang, Cheol Seong; Yang, J Joshua

    2018-01-29

    A nociceptor is a critical and special receptor of a sensory neuron that is able to detect noxious stimulus and provide a rapid warning to the central nervous system to start the motor response in the human body and humanoid robotics. It differs from other common sensory receptors with its key features and functions, including the "no adaptation" and "sensitization" phenomena. In this study, we propose and experimentally demonstrate an artificial nociceptor based on a diffusive memristor with critical dynamics for the first time. Using this artificial nociceptor, we further built an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices in artificial intelligence systems, such as humanoid robots.

  18. Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Linn, Douglas M. (Inventor); Mehling, Joshua S. (Inventor); Radford, Nicolaus A. (Inventor); Bridgwater, Lyndon (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Davis, Donald R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor); hide

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, two hands, a neck, and a head. The torso extends along a primary axis and presents a pair of shoulders. The pair of arms movably extend from a respective one of the pair of shoulders. Each of the arms has a plurality of arm joints. The neck movably extends from the torso along the primary axis. The neck has at least one neck joint. The head movably extends from the neck along the primary axis. The head has at least one head joint. The shoulders are canted toward one another at a shrug angle that is defined between each of the shoulders such that a workspace is defined between the shoulders.

  19. A Robotic Therapy Case Study: Developing Joint Attention Skills with a Student on the Autism Spectrum

    ERIC Educational Resources Information Center

    Charron, Nancy; Lewis, Lundy; Craig, Michael

    2017-01-01

    The purpose of this article is to describe a possible methodology for developing joint attention skills in students with autism spectrum disorder. Co-robot therapy with the humanoid robot NAO was used to foster a student's joint attention skill development; 20-min sessions conducted once weekly during the school year were video recorded and…

  20. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    ERIC Educational Resources Information Center

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  1. Development of haptic based piezoresistive artificial fingertip: Toward efficient tactile sensing systems for humanoids.

    PubMed

    TermehYousefi, Amin; Azhari, Saman; Khajeh, Amin; Hamidon, Mohd Nizar; Tanaka, Hirofumi

    2017-08-01

    Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible devices. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of the resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) of the tomato was carried out to obtain its stress distribution and failure point under different external loads. The extracted computational information was then used to design and fabricate the nanocomposite-based artificial fingertip and to examine the maturity of the tomato. The obtained results demonstrate that the fabricated conformable and scalable artificial fingertip shows different electrical properties for ripe and unripe tomatoes. The artificial fingertip is also compatible with the development of brain-like systems for artificial skin, giving a periodic response under an applied load. Copyright © 2017. Published by Elsevier B.V.

  2. Motor contagion during human-human and human-robot interaction.

    PubMed

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.

  3. Motor Contagion during Human-Human and Human-Robot Interaction

    PubMed Central

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990

  4. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.
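
    As a point of reference for the localization step described above, the sketch below estimates a source azimuth from the delay between two microphone signals via cross-correlation and a far-field two-microphone model. This is a generic textbook approach, not the SIG implementation; the microphone spacing, sampling rate, and function name are assumptions.

```python
import numpy as np

def estimate_azimuth(left, right, fs, mic_distance=0.18, c=343.0):
    """Estimate source azimuth (radians) from two equal-length microphone signals.

    Cross-correlation gives the inter-microphone delay; a far-field model
    (delay = mic_distance * sin(azimuth) / c) converts it to an angle.
    Positive azimuth means the left channel lags, i.e. the source is on the
    right-microphone side.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)       # delay of `left` in samples
    tdoa = lag / fs                                # delay in seconds
    max_delay = mic_distance / c                   # physically possible bound
    tdoa = float(np.clip(tdoa, -max_delay, max_delay))
    return float(np.arcsin(tdoa * c / mic_distance))
```

    Turning the head so that the estimated azimuth goes to zero corresponds to the paper's strategy of aligning the microphone pair orthogonal to the source.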

  5. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner.

    PubMed

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.

  6. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner

    PubMed Central

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967

  7. Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses.

    PubMed

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot.
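
    The benchmarking criterion above (evoked sway normalized to robot weight and COM height) is left abstract in the record, so the following is only a minimal sketch of what such a normalization could look like; the function name, the use of peak excursion, and the small-angle normalization are assumptions rather than the authors' protocol.

```python
import numpy as np

def sway_benchmark(com_sagittal, weight_n, com_height_m):
    """Toy benchmark score for a balance test: peak sagittal COM excursion
    evoked by a disturbance, reported (a) as a dimensionless sway angle
    (excursion / COM height, small-angle approximation) and (b) as the
    corresponding gravitational toppling torque (weight * excursion).
    `com_sagittal` holds COM positions in meters relative to the
    pre-disturbance equilibrium, sampled over the trial.
    """
    peak_excursion = float(np.max(np.abs(com_sagittal)))
    sway_angle = peak_excursion / com_height_m          # dimensionless
    toppling_torque = weight_n * peak_excursion         # N*m
    return sway_angle, toppling_torque
```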

  8. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    PubMed Central

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428

  9. Development of a novel humanoid-robot simulator for endoscope with pharyngeal reflex and real-life responses.

    PubMed

    Ueki, Masaru; Uehara, Kazutake; Isomoto, Hajime

    2018-05-15

    In recent years, there has been a growing need for skill quantification of endoscopic specialists. Various educational simulators have been created to help increase the endoscopy performance of medical students and trainees. Recent research suggests that the use of simulators helps increase the skill level of endoscopists while improving patient safety [1, 2]. However, previous simulators lack sufficient realism and are unable to replicate natural human reactions during endoscopy or to quantify endoscopic skills. We developed a novel humanoid-robot simulator (named mikoto®) with pharyngeal reflexes and real-life responses to endoscopy.

  10. Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger

    PubMed Central

    Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.

    2013-01-01

    In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, performing finger reaching movements to visual targets with the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear and often includes mechanical coupling of the last two joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing whether this cortical model is able to learn the inverse kinematics of an actual anthropomorphic humanoid finger having its last two joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569
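
    For comparison with the cortical model (which is not reproduced here), a conventional baseline for the same kinematic problem is damped-least-squares inverse kinematics on a planar finger whose distal joint is mechanically coupled to the middle one. The link lengths, coupling ratio, and damping constant below are illustrative assumptions.

```python
import numpy as np

LINK_LENGTHS = (0.05, 0.03, 0.02)   # illustrative phalanx lengths (m)
COUPLING = 0.7                      # assumed distal/middle joint coupling ratio

def fingertip(q):
    """Planar fingertip position for free joints q = (q1, q2); q3 = COUPLING * q2."""
    q1, q2 = q
    angles = np.cumsum([q1, q2, COUPLING * q2])
    x = float(np.sum(np.asarray(LINK_LENGTHS) * np.cos(angles)))
    y = float(np.sum(np.asarray(LINK_LENGTHS) * np.sin(angles)))
    return np.array([x, y])

def ik_step(q, target, damping=1e-3, eps=1e-6):
    """One damped-least-squares step of the 2 free joints toward a 2-D target."""
    q = np.asarray(q, float)
    error = np.asarray(target, float) - fingertip(q)
    J = np.zeros((2, 2))                       # numerical Jacobian
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fingertip(q + dq) - fingertip(q)) / eps
    step = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), error)
    return q + step
```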

  11. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF) of motion: pan, tilt and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
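
    The record does not spell out the face-tracking filter, so here is a minimal constant-velocity Kalman filter over the face centroid in image coordinates, the kind of estimator commonly paired with a webcam detector; the state layout, noise levels, and class name are assumptions rather than the authors' implementation.

```python
import numpy as np

class FaceTracker:
    """Constant-velocity Kalman filter over a face centroid (u, v) in pixels.

    State: [u, v, du, dv]. Measurements are centroids from any face detector.
    """

    def __init__(self, dt=1 / 30, process_var=50.0, meas_var=25.0):
        self.x = np.zeros(4)                 # state estimate
        self.P = np.eye(4) * 1e3             # state covariance (large = uncertain)
        self.F = np.eye(4)                   # constant-velocity transition model
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.zeros((2, 4))            # we only measure position
        self.H[0, 0] = 1.0
        self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_var     # process noise
        self.R = np.eye(2) * meas_var        # measurement noise (pixels^2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                    # predicted centroid

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                    # filtered centroid
```

    The filtered centroid error would then drive the neck pan/tilt and eye joints of such a head.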

  12. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems is the move-to-grasp problem, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
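
    The notion of a refining sequence of controllers can be illustrated with a deliberately tiny 1-D stand-in: each controller runs until its own convergence predicate holds and then hands the system over to a more precise successor. The stages and thresholds below are hypothetical and are not Robonaut's controllers.

```python
import numpy as np

def run_refining_sequence(x0, controllers):
    """Run a refining sequence: each controller is active until its own
    convergence predicate holds, then hands over to a more precise successor.
    `controllers` is a list of (step_fn, converged_fn) pairs.
    """
    x = float(x0)
    for step, converged in controllers:
        while not converged(x):
            x = step(x)
    return x

# Hypothetical move-to-grasp stages, in 1-D "distance to object" coordinates:
coarse = (lambda x: 0.5 * x, lambda x: abs(x) < 0.5)                  # gross approach
fine = (lambda x: x - np.sign(x) * 0.05, lambda x: abs(x) < 0.05)     # visual servoing
grasp = (lambda x: 0.2 * x, lambda x: abs(x) < 0.005)                 # haptic closure

final_distance = run_refining_sequence(3.0, [coarse, fine, grasp])
```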

  13. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems is the move-to-grasp problem, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  14. Designing a Robot for Cultural Brokering in Education

    ERIC Educational Resources Information Center

    Kim, Yanghee

    2016-01-01

    The increasing number of English language learning children in U.S. classrooms and the need for effective programs that support these children present a great challenge to the current educational paradigm. The challenge may be met, at least in part, by an innovative humanoid robot serving as a cultural broker that mediates collaborative…

  15. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
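
    Since Muecas can be driven directly by FACS, a toy of what an action-unit-to-actuator mapping could look like is shown below. The AU numbers and their meanings follow the FACS convention, but the actuator names and gains are invented for illustration and are not the robot's real joint map.

```python
# Hypothetical mapping from a few FACS action units (AUs) to head actuator targets.
AU_TO_ACTUATORS = {
    "AU1": {"inner_brow": +0.6},                        # inner brow raiser
    "AU2": {"outer_brow": +0.6},                        # outer brow raiser
    "AU4": {"inner_brow": -0.5, "outer_brow": -0.3},    # brow lowerer
    "AU12": {"mouth_corner": +0.8},                     # lip corner puller (smile)
    "AU15": {"mouth_corner": -0.6},                     # lip corner depressor
    "AU26": {"jaw": +0.7},                              # jaw drop
}

def expression_to_targets(active_aus):
    """Combine AU intensities (0..1) into per-actuator targets (arbitrary units)."""
    targets = {}
    for au, intensity in active_aus.items():
        for actuator, gain in AU_TO_ACTUATORS.get(au, {}).items():
            targets[actuator] = targets.get(actuator, 0.0) + gain * intensity
    return targets

# e.g. expression_to_targets({"AU12": 1.0, "AU26": 0.4}) -> an open-mouth smile
```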

  16. Arash: A social robot buddy to support children with cancer in a hospital environment.

    PubMed

    Meghdari, Ali; Shariati, Azadeh; Alemi, Minoo; Vossoughi, Gholamreza R; Eydi, Abdollah; Ahmadi, Ehsan; Mozafari, Behrad; Amoozandeh Nobaveh, Ali; Tahami, Reza

    2018-06-01

    This article presents the thorough design procedure, specifications, and performance of a mobile social robot friend, Arash, for the educational and therapeutic involvement of children with cancer, based on their interests and needs. Our research focuses on employing Arash in a pediatric hospital environment to entertain, assist, and educate children with cancer who suffer from physical pain caused by both the disease and its treatment process. Since cancer treatment causes emotional distress, which can reduce the efficiency of medications, using social robots to interact with children with cancer in a hospital environment could decrease this distress, thereby improving the effectiveness of their treatment. Arash is a 15-degree-of-freedom, low-cost humanoid mobile robot buddy, carefully designed with appropriate measures and developed to interact with children ages 5-12 years old. The robot has five physical subsystems: the head, arms, torso, waist, and mobile platform. The robot's final appearance is a significant novel concept, as it was selected based on a survey of 50 children with chronic diseases at three pediatric hospitals in Tehran, Iran. Based on these measures and desires, Arash was designed, built, improved, and enhanced to operate successfully in pediatric cancer hospitals. Two experiments were devised to evaluate the children's level of acceptance and involvement with the robot, assess their feelings about it, and measure how closely the robot matched the favored conceptual sketch. Both experiments were conducted in the form of storytelling and appearance/performance evaluations. The obtained results confirm the high engagement and interest of pediatric cancer patients with the constructed robot.

  17. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Kim, Youngmoo E.

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712
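
    A hedged sketch of the decoding pipeline implied by the abstract: block-averaged hemodynamic features are classified into one of four motor-imagery classes, and each class is mapped to a high-level navigation command. The specific class-to-command pairing, the feature layout, and the use of linear discriminant analysis are assumptions, not details reported in the record.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical pairing of the four imagery classes with the four commands.
COMMANDS = {0: "turn_left",       # left-hand imagery
            1: "turn_right",      # right-hand imagery
            2: "move_forward",    # left-foot imagery
            3: "move_backward"}   # right-foot imagery

def train_decoder(features, labels):
    """features: (n_trials, n_features) block-averaged HbO values; labels: 0..3."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, labels)
    return clf

def decode_command(clf, trial_features):
    """Classify one trial and translate the class into a navigation command."""
    label = int(clf.predict(np.asarray(trial_features, float).reshape(1, -1))[0])
    return COMMANDS[label]
```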

  18. Robots testing robots: ALAN-Arm, a humanoid arm for the testing of robotic rehabilitation systems.

    PubMed

    Brookes, Jack; Kuznecovs, Maksims; Kanakis, Menelaos; Grigals, Arturs; Narvidas, Mazvydas; Gallagher, Justin; Levesley, Martin

    2017-07-01

    Robotics is increasing in popularity as a method of providing rich, personalized and cost-effective physiotherapy to individuals with some degree of upper limb paralysis, such as those who have suffered a stroke. These robotic rehabilitation systems are often high powered, and exoskeletal systems can attach to the person in a restrictive manner. Therefore, ensuring the mechanical safety of these devices before they come in contact with individuals is a priority. Additionally, rehabilitation systems may use novel sensor systems to measure current arm position. Used to capture and assess patient movements, these first need to be verified for accuracy by an external system. We present the ALAN-Arm, a humanoid robotic arm designed to be used for both accuracy benchmarking and safety testing of robotic rehabilitation systems. The system can be attached to a rehabilitation device and then replay generated or human movement trajectories, as well as autonomously play rehabilitation games or activities. Tests of the ALAN-Arm indicated it could recreate the path of a generated slow movement path with a maximum error of 14.2mm (mean = 5.8mm) and perform cyclic movements up to 0.6Hz with low gain (<1.5dB). Replaying human data trajectories showed the ability to largely preserve human movement characteristics with slightly higher path length and lower normalised jerk.
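
    Normalized jerk, one of the reported movement-quality measures, can be computed from a sampled trajectory as below. Several dimensionless formulations exist; this sketch uses one common variant (jerk integral scaled by duration to the fifth power over squared path length) and is not necessarily the exact definition used in the ALAN-Arm evaluation.

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless smoothness score of a sampled path, shape (n_samples, dim).

    Uses sqrt(0.5 * integral(|jerk|^2) * duration^5 / path_length^2);
    lower values indicate smoother movement.
    """
    pos = np.asarray(positions, float)
    vel = np.gradient(pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(pos) - 1)
    path_length = float(np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)))
    jerk_sq_integral = float(np.sum(jerk ** 2)) * dt
    return float(np.sqrt(0.5 * jerk_sq_integral * duration ** 5 / path_length ** 2))
```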

  19. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.

  20. Effect of feedback from a socially interactive humanoid robot on reaching kinematics in children with and without cerebral palsy: A pilot study.

    PubMed

    Chen, Yuping; Garcia-Vergara, Sergio; Howard, Ayanna M

    2017-08-17

    To examine whether children with or without cerebral palsy (CP) would follow a humanoid robot's (i.e., Darwin's) feedback to move their arm faster when playing virtual reality (VR) games. Seven children with mild CP and 10 able-bodied children participated. Real-time reaching was evaluated by playing the Super Pop VR™ system, including a 2-game baseline, 3-game acquisition, and another 2-game extinction. During acquisition, Darwin provided verbal feedback to direct the child to reach a kinematically defined target goal (i.e., 80% of average movement time in baseline). Outcome variables included the percentage of successful reaches ("% successful reaches"), movement time (MT), average speed, path, and number of movement units. All games during acquisition and extinction had larger "% successful reaches," faster speeds, and faster MTs than the 2 games during baseline (p < .05). Children with and without CP could follow the robot's feedback for changing their reaching kinematics when playing VR games.

  1. How Developmental Psychology and Robotics Complement Each Other

    DTIC Science & Technology

    2000-01-01

    [Indexing snippet; the abstract text is garbled by extraction. Recoverable fragments cite work by Brooks, Breazeal, Marjanovic, Scassellati and Williamson, including a system for regulating interaction intensities (Breazeal & Scassellati) and "The Cog Project: Building a Humanoid Robot" (1999, in C. L. Nehaniv, ed.).]

  2. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
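
    The record describes the filter only at a high level, so the following is a much-simplified, zero-order Takagi-Sugeno sketch of the general idea: Gaussian rule memberships over the raw measurement, with consequents fit by least squares from experimentally collected (raw, ground-truth) angle pairs. The single scalar input and all parameter choices are assumptions; the TEO filter fuses more image information than this.

```python
import numpy as np

class SugenoCorrector:
    """Zero-order Takagi-Sugeno corrector: maps a raw, distorted tilt-angle
    measurement to a corrected angle. Gaussian rule memberships over the input;
    rule consequents are fit by least squares from (raw, ground-truth) pairs,
    loosely mimicking the training stage of a neuro-fuzzy system.
    """

    def __init__(self, centers, sigma):
        self.centers = np.asarray(centers, float)   # rule centers over the raw angle
        self.sigma = float(sigma)                   # shared membership width
        self.consequents = np.zeros(len(self.centers))

    def _weights(self, x):
        w = np.exp(-0.5 * ((x[:, None] - self.centers[None, :]) / self.sigma) ** 2)
        return w / w.sum(axis=1, keepdims=True)     # normalized firing strengths

    def fit(self, raw_angles, true_angles):
        W = self._weights(np.asarray(raw_angles, float))
        self.consequents, *_ = np.linalg.lstsq(
            W, np.asarray(true_angles, float), rcond=None)
        return self

    def correct(self, raw_angle):
        w = self._weights(np.atleast_1d(float(raw_angle)))
        return float((w @ self.consequents)[0])
```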

  3. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  4. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    [Indexing snippet; only front-matter fragments survive extraction. The table of contents lists a Linear Inverted Pendulum Model and a Planar Five-link Model; the acronym list includes LVDT (Linear Variable Differential Transformer), MEMS (Microelectromechanical Systems), MHE (Moving Horizon Estimator), and QP.]
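
    Since the surviving fragment mentions the Linear Inverted Pendulum Model, a brief reminder of that model may be useful: the COM dynamics reduce to xddot = (g / z_c) * (x - p), where p is the ZMP. The forward simulation below is a generic textbook sketch, not the estimator developed in the thesis; parameter values are illustrative.

```python
import numpy as np

def simulate_lipm(x0, xdot0, zmp, z_c=0.8, g=9.81, dt=0.005, steps=400):
    """Forward-simulate the sagittal Linear Inverted Pendulum Model:
        xddot = (g / z_c) * (x - p)
    where x is the COM position, p the (here constant) ZMP, z_c the COM height.
    """
    omega_sq = g / z_c
    x, xdot = float(x0), float(xdot0)
    trajectory = []
    for _ in range(steps):
        xddot = omega_sq * (x - zmp)
        xdot += xddot * dt          # semi-implicit Euler integration
        x += xdot * dt
        trajectory.append(x)
    return np.array(trajectory)
```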

  5. iss031e148737

    NASA Image and Video Library

    2012-06-27

    ISS031-E-148737 (27 June 2012) --- European Space Agency astronaut Andre Kuipers, Expedition 31 flight engineer, poses for a photo with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  6. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
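
    The core geometric tool mentioned in the abstract, projection into the null space of a task, is sketched below together with a simple consistency residual of the kind such a recognizer could use: after a task is recognized, the observed joint velocity can be projected into that task's null space before testing the next candidate. This is a schematic rendering of the idea, not the authors' algorithm; the residual definition and function names are assumptions.

```python
import numpy as np

def nullspace_projector(J):
    """Projector onto the null space of task Jacobian J: joint velocities in
    this subspace do not change the task velocity J @ qdot."""
    J = np.atleast_2d(np.asarray(J, float))
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

def task_residual(J, expected_task_velocity, qdot_observed):
    """How far the observed joint motion is from producing the task velocity a
    candidate controller would demand; small residuals suggest the task is active."""
    return float(np.linalg.norm(J @ qdot_observed - expected_task_velocity))

# After a task is recognized, project the observed motion into its null space
# and test the next candidate on what remains (decoupling parallel tasks):
#   qdot_remaining = nullspace_projector(J_recognized) @ qdot_observed
```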

  7. Pleasant to the Touch: By Emulating Nature, Scientists Hope to Find Innovative New Uses for Soft Robotics in Health-Care Technology.

    PubMed

    Cianchetti, Matteo; Laschi, Cecilia

    2016-01-01

    Open your Internet browser and search for videos showing the most advanced humanoid robots. Look at how they move and walk. Observe their motion and their interaction with the environment (the ground, users, target objects). Now, search for a video of your favorite sports player. Despite the undoubtedly great achievements of modern robotics, it will become quite evident that a lot of work still remains.

  8. A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict

    ERIC Educational Resources Information Center

    Mazzoni, Elvis; Benvenuti, Martina

    2015-01-01

    This paper presents an exploratory study in which a humanoid robot (MecWilly) acted as a partner to preschool children, helping them to learn English words. In order to use the Socio-Cognitive Conflict paradigm to induce the knowledge acquisition process, we designed a playful activity in which children worked in pairs with another child or with…

  9. Humanoid robot Lola: design and walking control.

    PubMed

    Buschmann, Thomas; Lohmeier, Sebastian; Ulbrich, Heinz

    2009-01-01

    In this paper we present the humanoid robot LOLA, its mechatronic hardware design, simulation and real-time walking control. The goal of the LOLA-project is to build a machine capable of stable, autonomous, fast and human-like walking. LOLA is characterized by a redundant kinematic configuration with 7-DoF legs, an extremely lightweight design, joint actuators with brushless motors and an electronics architecture using decentralized joint control. Special emphasis was put on an improved mass distribution of the legs to achieve good dynamic performance. Trajectory generation and control aim at faster, more flexible and robust walking. Center of mass trajectories are calculated in real-time from footstep locations using quadratic programming and spline collocation methods. Stabilizing control uses hybrid position/force control in task space with an inner joint position control loop. Inertial stabilization is achieved by modifying the contact force trajectories.
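
    The record states that COM trajectories are computed from footstep locations by quadratic programming; a common textbook form of that computation treats COM jerk as the decision variable of a linear inverted pendulum and tracks a footstep-derived ZMP reference. The unconstrained, closed-form version below is a sketch of that family of methods, not LOLA's spline-collocation formulation; the horizon, weights, and variable names are assumptions.

```python
import numpy as np

def com_jerk_plan(x0, zmp_ref, z_c=0.8, g=9.81, dt=0.01, alpha=1e-6):
    """Plan a COM jerk sequence so that the LIPM's ZMP tracks a reference derived
    from the footstep plan. Unconstrained QP solved in closed form.
    x0 = [com, com_velocity, com_acceleration]; zmp_ref has one entry per step.
    """
    x0 = np.asarray(x0, float)
    zmp_ref = np.asarray(zmp_ref, float)
    N = len(zmp_ref)
    A = np.array([[1.0, dt, dt ** 2 / 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])               # triple-integrator COM dynamics
    B = np.array([dt ** 3 / 6, dt ** 2 / 2, dt])  # effect of one jerk sample
    C = np.array([1.0, 0.0, -z_c / g])            # ZMP output of the LIPM state

    # Prediction matrices: zmp = Px @ x0 + Pu @ U over the horizon.
    Px = np.zeros((N, 3))
    Pu = np.zeros((N, N))
    powers = [np.eye(3)]
    Ak = np.eye(3)
    for k in range(N):
        Ak = Ak @ A
        powers.append(Ak)
        Px[k] = C @ Ak
        for j in range(k + 1):
            Pu[k, j] = C @ powers[k - j] @ B
    # min ||Px x0 + Pu U - zmp_ref||^2 + alpha ||U||^2  (regularized least squares)
    H = Pu.T @ Pu + alpha * np.eye(N)
    U = np.linalg.solve(H, Pu.T @ (zmp_ref - Px @ x0))
    return U   # jerk sequence; integrate with A, B to recover the COM trajectory
```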

  10. Dexterous Humanoid Robotic Wrist

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Reich, David M. (Inventor); Wampler, II, Charles W. (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Nguyen, Vienny (Inventor)

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, a neck, a head, a wrist joint assembly, and a control system. The arms and the neck movably extend from the torso. Each of the arms includes a lower arm and a hand that is rotatable relative to the lower arm. The wrist joint assembly is operatively defined between the lower arm and the hand. The wrist joint assembly includes a yaw axis and a pitch axis. The pitch axis is disposed in a spaced relationship to the yaw axis such that the axes are generally perpendicular. The pitch axis extends between the yaw axis and the lower arm. The hand is rotatable relative to the lower arm about each of the yaw axis and the pitch axis. The control system is configured for determining a yaw angle and a pitch angle of the wrist joint assembly.
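
    The patent describes a wrist with perpendicular yaw and pitch axes and a control system that determines the two angles; a minimal kinematic sketch of such a joint is given below. The assignment of yaw to the z axis and pitch to the y axis, and the yaw-then-pitch ordering, are assumptions for illustration only.

```python
import numpy as np

def wrist_orientation(yaw, pitch):
    """Hand orientation relative to the lower arm for a yaw-then-pitch wrist
    with perpendicular axes (yaw about z, pitch about y; an assumed convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    return Rz @ Ry

def wrist_angles(R):
    """Recover (yaw, pitch) from R = Rz(yaw) @ Ry(pitch)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arctan2(-R[2, 0], R[2, 2])
    return yaw, pitch
```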

  11. A neural framework for organization and flexible utilization of episodic memory in cumulatively learning baby humanoids.

    PubMed

    Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2014-12-01

    Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of organizing episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration). Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.

  12. Emotion attribution to a non-humanoid robot in different social situations.

    PubMed

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

    In the last few years there was an increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether relying on the robot's greeting behaviour human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

  13. Emotion Attribution to a Non-Humanoid Robot in Different Social Situations

    PubMed Central

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

    In the last few years there was an increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether relying on the robot's greeting behaviour human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot. PMID:25551218

  14. Pettit enters data in a laptop computer

    NASA Image and Video Library

    2012-03-13

    ISS030-E-142862 (13 March 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, enters data in a computer while working with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  15. Investigating Models of Social Development Using a Humanoid Robot

    DTIC Science & Technology

    1998-01-01

    [Indexing snippet; the abstract text is garbled by extraction. Recoverable fragments reference work on human-robot interaction and cooperation (Takanishi, Hirano & Sato 1998; Morita, Shibuya), neural models of spinal motor neurons (Williamson 1996), and a discussion of joint attention and the behavioral manifestations of pervasive developmental disorders such as autism.]

  16. Robonaut 2 - The First Humanoid Robot in Space

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Radford, N. A.; Mehling, J. S.; Abdallah, M. E.; Bridgwater, L. B.; Sanders, A. M.; Askew, R. S.; Linn, D. M.; Yamokoski, J. D.; Permenter, F. A.; hide

    2010-01-01

    NASA and General Motors have developed the second generation Robonaut, Robonaut 2 or R2, and it is scheduled to arrive on the International Space Station in late 2010 and undergo initial testing in early 2011. This state-of-the-art, dexterous, anthropomorphic robotic torso has significant technical improvements over its predecessor, making it a far more valuable tool for astronauts. Upgrades include: increased force sensing, greater range of motion, higher bandwidth and improved dexterity. R2's integrated mechatronics design results in a more compact and robust distributed control system with a fraction of the wiring of the original Robonaut. Modularity is prevalent throughout the hardware and software, along with innovative and layered approaches for sensing and control. The most important aspects of the Robonaut philosophy are clearly present in this latest model's ability to allow comfortable human interaction and in its design to perform significant work using the same hardware and interfaces used by people. The following describes the mechanisms, integrated electronics, control strategies and user interface that make R2 a promising addition to the Space Station and other environments where humanoid robots can assist people.

  17. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia

    PubMed Central

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

    The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a “control software” for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient- therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction. PMID:28713296

  18. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia.

    PubMed

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

    The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a "control software" for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient- therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction.

  19. Multi-function robots with speech interaction and emotion feedback

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Lou, Guanting; Ma, Mengchao

    2018-03-01

    Nowadays, service robots have been applied in many public settings; however, most of them still lack speech interaction, especially speech interaction with emotion feedback. To make the robot more humanoid, an Arduino microcontroller was used in this study to run the speech recognition module and the servo motor control module, providing the robot's speech interaction and emotion feedback functions. In addition, a W5100 Ethernet controller was adopted for the network connection to transmit information via the Internet, giving the robot broad application prospects in the area of the Internet of Things (IoT).

  20. Biomimetic shoulder complex based on 3-PSS/S spherical parallel mechanism

    NASA Astrophysics Data System (ADS)

    Hou, Yulei; Hu, Xinzhe; Zeng, Daxing; Zhou, Yulin

    2015-01-01

    The application of parallel mechanisms is still limited in the humanoid robot field, and existing parallel humanoid robot joints have not yet fully reflected the characteristics of parallel mechanisms, nor have they effectively solved problems such as a small workspace. From a structural and functional bionic point of view, a three-degrees-of-freedom (DOF) spherical parallel mechanism for the shoulder complex of a humanoid robot is presented. Based on an analysis of the structure and kinetic characteristics of the human shoulder complex, the 3-PSS/S (P for prismatic pair, S for spherical pair) architecture is chosen as the original configuration for the shoulder complex. Using a genetic algorithm, the 3-PSS/S spherical parallel mechanism is optimized, and the orientation workspace of the prototype mechanism is enlarged noticeably. Taking into account the practical structure of the human shoulder complex, an offset output mode is put forward, in which the output rod of the mechanism can turn in any direction about a point located a certain distance from the rotation center of the mechanism; this makes it possible for the workspace of the mechanism to match the actual motion space of the human shoulder joint. The relationship between the attitude angles in different coordinate systems is derived, which establishes the foundation for motion descriptions under different conditions and for control development. In summary, the 3-PSS/S spherical parallel mechanism is proposed for the shoulder complex, and consistency between the workspace of the mechanism and that of the human shoulder complex is achieved through structural parameter optimization and the offset output design.
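
    The attitude-angle relationship mentioned in the abstract is not reproduced in the record; the sketch below shows only the generic ingredients such a derivation builds on: ZYX attitude angles of a rotation matrix and re-expression of the same orientation in a second coordinate frame. The angle convention and function names are assumptions.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from ZYX (yaw-pitch-roll) attitude angles."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def zyx_angles(R):
    """ZYX attitude angles of a rotation matrix (pitch away from +/-90 degrees)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

def reexpress(R_in_a, Q_a_to_b):
    """Attitude angles of the same physical rotation re-expressed in frame B,
    given the rotation Q mapping frame-A coordinates to frame-B coordinates."""
    return zyx_angles(Q_a_to_b @ R_in_a @ Q_a_to_b.T)
```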

  1. Concurrent Path Planning with One or More Humanoid Robots

    NASA Technical Reports Server (NTRS)

    Reiland, Matthew J. (Inventor); Sanders, Adam M. (Inventor)

    2014-01-01

    A robotic system includes a controller and one or more robots each having a plurality of robotic joints. Each of the robotic joints is independently controllable to thereby execute a cooperative work task having at least one task execution fork, leading to multiple independent subtasks. The controller coordinates motion of the robot(s) during execution of the cooperative work task. The controller groups the robotic joints into task-specific robotic subsystems, and synchronizes motion of different subsystems during execution of the various subtasks of the cooperative work task. A method for executing the cooperative work task using the robotic system includes automatically grouping the robotic joints into task-specific subsystems, and assigning subtasks of the cooperative work task to the subsystems upon reaching a task execution fork. The method further includes coordinating execution of the subtasks after reaching the task execution fork.

  2. Balance Maintenance in High-Speed Motion of Humanoid Robot Arm-Based on the 6D Constraints of Momentum Change Rate

    PubMed Central

    Zhang, Da-song; Chu, Jian

    2014-01-01

    Based on the 6D constraints of momentum change rate (CMCR), this paper puts forward a real-time, full balance maintenance method for a humanoid robot during high-speed movement of its 7-DOF arm. First, the total momentum formula for the robot's two arms is given, and the momentum change rate is defined as the time derivative of the total momentum. The authors also illustrate the idea of full balance maintenance and analyze the physical meaning of the 6D CMCR and its fundamental relation to full balance maintenance. Moreover, a discretization and optimization solution of the CMCR is provided under the motion constraints of the auxiliary arm's joints, and the solving algorithm is optimized. The simulation results show the validity and generality of the proposed method for full balance maintenance in the 6 DOFs of the robot body under the 6D CMCR. The method ensures 6D dynamic balance performance and provides an ample ZMP stability margin. The resulting motion of the auxiliary arm leaves large margins in joint space, and the angular velocities and accelerations of these joints lie within the predefined limits. The proposed algorithm also has good real-time performance. PMID:24883404
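
    As a minimal illustration of the quantity being constrained, the sketch below forms the total linear momentum of a set of arm links and estimates its change rate by finite differences. The paper works with the full 6D (linear and angular) momentum of both arms and an optimization over it; only the bookkeeping step is shown here, with illustrative function names.

```python
import numpy as np

def total_linear_momentum(link_masses, link_com_velocities):
    """Total linear momentum of the arm links: sum_i m_i * v_i (3-vector).
    link_com_velocities has shape (n_links, 3)."""
    m = np.asarray(link_masses, float)[:, None]
    v = np.asarray(link_com_velocities, float)
    return np.sum(m * v, axis=0)

def momentum_change_rate(momentum_prev, momentum_now, dt):
    """Finite-difference estimate of dP/dt, the quantity being constrained
    (linear part only; the angular part is handled analogously)."""
    return (np.asarray(momentum_now, float) - np.asarray(momentum_prev, float)) / dt
```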

  3. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley” (a phenomenon in which highly humanlike entities provoke aversion in human observers) has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (Nagents = 60) to conduct an experimental test (Nparticipants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting that positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness. PMID:28912736

  4. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley"-a phenomenon in which highly humanlike entities provoke aversion in human observers-has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, even more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding-suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.

  5. Achieving Collaborative Interaction with a Humanoid Robot

    DTIC Science & Technology

    2003-01-01

    gestures will become more prevalent in the kinds of interactions we study. Gesturing is a natural part of human-human communication. It ... to human communication. However, in human-to-human experiments, Tversky et al. observed a similar result and found that speakers took the ...

  6. Developmental Approach for Behavior Learning Using Primitive Motion Skills.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2018-05-01

    Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could first be achieved through imitation, yet the discussion on the origin of primitive imitative abilities is often neglected, referring instead to the possibility of its innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities through sensorimotor associative learning driven by self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw camera images, without requiring any kinematic model; incremental learning of spatio-temporal motion sequences that dynamically generates a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and generation of complex behavior by combining the segmented motion primitives. In our experiment, the robot acquires its self-posture by observing the image of its own body while performing actions in front of a mirror through body babbling. The complete architecture was evaluated in simulation and in real-robot experiments on the DARwIn-OP humanoid robot.
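
    As a loose, hypothetical illustration of the kinematics-free segmentation idea mentioned above (not the authors' image-based implementation), the sketch below cuts a frame sequence into motion primitives wherever inter-frame change momentarily drops, i.e. where the observed motion pauses.

```python
import numpy as np

def segment_motion_primitives(frames, threshold=2.0):
    """Split a sequence of grayscale frames into motion primitives.

    A new primitive starts whenever the mean absolute inter-frame difference
    drops below `threshold`, i.e. the observed motion momentarily pauses.
    This is only a sketch of the idea of kinematics-free segmentation.
    """
    primitives, current = [], [frames[0]]
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.mean(np.abs(curr.astype(float) - prev.astype(float)))
        current.append(curr)
        if motion < threshold:
            primitives.append(current)
            current = [curr]          # boundary frame starts the next primitive
    if len(current) > 1:
        primitives.append(current)
    return primitives
```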

  7. Reducing children's pain and distress towards flu vaccinations: a novel and effective application of humanoid robotics.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2013-06-07

    Millions of children in North America receive an annual flu vaccination, many of whom are at risk of experiencing severe distress. Millions of children also use technologically advanced devices such as computers and cell phones. Based on this familiarity, we introduced another sophisticated device - a humanoid robot - to interact with children during their vaccination. We hypothesized that these children would experience less pain and distress than children who did not have this interaction. This was a randomized controlled study in which 57 children (30 male; age, mean±SD: 6.87±1.34 years) were randomly assigned to a vaccination session with a nurse who used standard administration procedures, or with a robot who was programmed to use cognitive-behavioral strategies with them while a nurse administered the vaccination. Measures of pain and distress were completed by children, parents, nurses, and researchers. Multivariate analyses of variance indicated that interaction with a robot during flu vaccination resulted in significantly less pain and distress in children according to parent, child, nurse, and researcher ratings with effect sizes in the moderate to high range (Cohen's d=0.49-0.90). This is the first study to examine the effectiveness of child-robot interaction for reducing children's pain and distress during a medical procedure. All measures of reduction were significant. These findings suggest that further research on robotics at the bedside is warranted to determine how they can effectively help children manage painful medical procedures. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  8. Developing Humanoid Robots for Real-World Environments

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Kuhlman, Michael; Assad, Chris; Keymeulen, Didier

    2008-01-01

    Humanoids are steadily improving in appearance and in the functionality they demonstrate in controlled environments. To address the challenges of operation in the real world, researchers have proposed the use of brain-inspired architectures for robot control, and the use of robot learning techniques that enable the robot to acquire and tune skills and behaviours. In the first part of the paper we introduce new concepts and results in these two areas. First, we present a cerebellum-inspired model that demonstrated efficiency in the sensory-motor control of anthropomorphic arms, and in gait control of dynamic walkers. Then, we present a set of new ideas related to robot learning, emphasizing the importance of developing teaching techniques that support learning. In the second part of the paper we propose the use in robotics of iterative and incremental development methodologies, in the context of practical task-oriented applications. These methodologies promise to reach system-level integration rapidly, and to identify early the system-level weaknesses to focus on. We apply this methodology to a task targeting the automated assembly of a modular structure using HOAP-2. We confirm that this approach led to rapid development of an end-to-end capability, and offered guidance on which technologies to focus on for gradual improvement of a complete functional system. It is believed that providing Grand Challenge type milestones in practical task-oriented applications accelerates development. As a meaningful target in the short to mid term we propose the 'IKEA Challenge', aimed at the demonstration of autonomous assembly of various pieces of furniture, from the box, following the included written/drawn instructions.

  9. An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.

    PubMed

    Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S

    2011-01-01

    Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by social functioning characterized by impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships, and a lack of social and emotional exchange. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of robotics as an improved diagnostic and early intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot has the ability to track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and two 1-DOF eyelids, raise its two 1-DOF wings, play sound, and record sound. These attributes will give it the ability to be used for the diagnosis and treatment of autism. As part of this project, the robot and its electronic and control software have been developed; integration of semi-autonomous interaction, teleoperation by a remote healthcare provider, and initial trials with children in a local clinic are in progress.

  10. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study

    PubMed Central

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna

    2018-01-01

    Background: Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. Objective: The aim of this study was to explore participants’ qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. Methods: NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot’s head sensor. Evaluations were content-analyzed utilizing Boyatzis’ steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Results: Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Conclusions: Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. PMID:29724701
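
    To make the tap-to-advance interview flow concrete, here is a minimal sketch of a scripted, head-tap-driven dialogue loop against a hypothetical robot interface; the question wording and the say/wait_for_head_tap methods are assumptions, not the NAOqi/Choregraphe calls used in the study.

```python
# Minimal sketch of a scripted, tap-to-advance motivational interview.
# `Robot` is a hypothetical stand-in for the actual NAO/Choregraphe interface.

QUESTIONS = [
    "What physical activity would you like to do more of?",
    "Why does that matter to you?",
    "What might get in the way, and how could you work around it?",
    "What is one small step you could take this week?",
]

class Robot:
    def say(self, text):
        print("ROBOT:", text)                      # placeholder for speech synthesis

    def wait_for_head_tap(self):
        input("(tap head sensor = press Enter) ")  # placeholder for the tactile sensor

def run_interview(robot):
    robot.say("I will ask a few questions. Think aloud, then tap my head to continue.")
    for question in QUESTIONS:
        robot.say(question)
        robot.wait_for_head_tap()                  # participant answers aloud, then advances
    robot.say("Thank you. That is the end of the interview.")

if __name__ == "__main__":
    run_interview(Robot())
```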

  11. The use of new technologies for nutritional education in primary schools: a pilot study.

    PubMed

    Rosi, A; Dall'Asta, M; Brighenti, F; Del Rio, D; Volta, E; Baroni, I; Nalin, M; Coti Zelati, M; Sanna, A; Scazzina, F

    2016-11-01

    The aim of this study was to evaluate whether the presence of a humanoid robot could improve the efficacy of a game-based nutritional education intervention. This was a controlled, school-based pilot intervention carried out on fourth-grade school children (8-10 years old). A total of 112 children underwent a game-based nutritional education lesson on the importance of carbohydrates. For one group (n = 58), the lesson was carried out by a nutritional educator, the Master of Taste (MT), whereas for another group (n = 54), the Master of Taste was supported by a humanoid robot (MT + NAO). A third group of children (n = 33) served as a control and did not receive any lesson. The intervention efficacy was evaluated by questionnaires administered at the beginning and at the end of each intervention. The nutritional knowledge level was evaluated by the cultural-nutritional awareness factor (AF) score. A total of 290 questionnaires were analyzed. Both the MT and MT + NAO interventions significantly increased nutritional knowledge. At the end of the study, children in the MT and MT + NAO groups showed similar AF scores, and the AF scores of both intervention groups were significantly higher than the AF score of the control group. This study showed a significant increase in the nutritional knowledge of children involved in a game-based, single-lesson educational intervention performed by a figure with a background in food science. However, the presence of a humanoid robot supporting this figure's teaching activity did not result in any significant learning improvement. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  12. Comparison of Human and Humanoid Robot Control of Upright Stance

    PubMed Central

    Peterka, Robert J.

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to ~1 Hz) dynamic characteristics of human stance control. These subsystems are 1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions, and 2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different. PMID:19665564

  13. Comparison of human and humanoid robot control of upright stance.

    PubMed

    Peterka, Robert J

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
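
    As a toy illustration of the "sensory integration" idea discussed in the two records above (not Peterka's identified model), the sketch below stabilizes a single-link inverted pendulum by blending two noisy orientation estimates with an adjustable sensory weight before applying PD feedback; all parameters are assumed, and the slow torque-feedback "effort control" loop is omitted.

```python
import numpy as np

# Toy single-link inverted pendulum stabilized by sensory-weighted PD feedback.
# Parameters are illustrative assumptions, not identified human values; the
# "effort control" (slow torque-feedback) loop of the full model is omitted.

J, m, g, h = 80.0, 75.0, 9.81, 0.9          # inertia, mass, gravity, CoM height
dt, steps = 0.005, 4000
kp, kd, w_proprio = 1500.0, 400.0, 0.7      # feedback gains and sensory weight

theta, omega = 0.02, 0.0                    # initial lean of about 1.1 degrees
rng = np.random.default_rng(0)

for _ in range(steps):
    # Sensory integration: blend two noisy orientation estimates.
    proprio = theta + rng.normal(0.0, 0.002)
    vestibular = theta + rng.normal(0.0, 0.004)
    theta_est = w_proprio * proprio + (1.0 - w_proprio) * vestibular

    # Corrective ankle torque computed from the blended estimate.
    torque = -kp * theta_est - kd * omega

    # Pendulum dynamics: gravity destabilizes, the controller corrects.
    alpha = (m * g * h * np.sin(theta) + torque) / J
    omega += alpha * dt
    theta += omega * dt

print("final lean angle (rad):", round(theta, 4))
```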

  14. Keep focussing: striatal dopamine multiple functions resolved in a single mechanism tested in a simulated humanoid robot

    PubMed Central

    Fiore, Vincenzo G.; Sperati, Valerio; Mannella, Francesco; Mirolli, Marco; Gurney, Kevin; Friston, Karl; Dolan, Raymond J.; Baldassarre, Gianluca

    2014-01-01

    The effects of striatal dopamine (DA) on behavior have been widely investigated over the past decades, with “phasic” burst firing considered the key expression of a reward prediction error responsible for reinforcement learning. Less well studied is “tonic” DA, whose putative functions include regulating vigor, incentive salience, and the disposition to exert effort, as well as modulating approach strategies. We present a model combining tonic and phasic DA to show how different outflows triggered by either intrinsically or extrinsically motivating stimuli dynamically affect the basal ganglia by impacting on a selection process this system performs on its cortical input. The model, which has been tested on the simulated humanoid robot iCub interacting with a mechatronic board, shows the putative functions ascribed to DA emerging from the combination of a standard computational mechanism and a differential sensitivity to the presence of DA across the striatum. PMID:24600422
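
    A common computational reading of the tonic/phasic distinction, sketched below purely for illustration (it is not the authors' basal ganglia model), lets phasic dopamine act as a reward prediction error that updates action values while tonic dopamine scales the vigor of selection through the softmax inverse temperature.

```python
import numpy as np

def softmax(values, inv_temp):
    z = inv_temp * (values - values.max())
    p = np.exp(z)
    return p / p.sum()

def select_and_learn(values, rewards, tonic_da, lr=0.1, trials=200, seed=0):
    """Toy actor: tonic DA scales selection vigor, phasic DA is the prediction error."""
    rng = np.random.default_rng(seed)
    values = values.copy()
    for _ in range(trials):
        probs = softmax(values, inv_temp=tonic_da)     # tonic DA -> vigor / decisiveness
        action = rng.choice(len(values), p=probs)
        phasic_da = rewards[action] - values[action]   # phasic DA -> prediction error
        values[action] += lr * phasic_da
    return values

print(select_and_learn(np.zeros(3), rewards=np.array([0.2, 0.8, 0.5]), tonic_da=5.0))
```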

  15. Predictive Coding Strategies for Developmental Neurorobotics

    PubMed Central

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of using the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction-error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches are first naively Piagetian, then move on to more Vygotskian ideas. More specifically, we investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if the minimization of prediction errors is used as an objective function. PMID:22586416
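
    As a hedged sketch of the prediction-error principle (not the architectures used in the cited experiments), the code below trains a linear forward model to predict the next sensory state from the current state and action, using the residual error itself as the learning signal.

```python
import numpy as np

class LinearForwardModel:
    """Predict the next sensory vector from (state, action); learn from the error."""

    def __init__(self, state_dim, action_dim, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        error = next_state - self.W @ x          # prediction error drives learning
        self.W += self.lr * np.outer(error, x)   # simple delta rule
        return np.linalg.norm(error)

# Toy usage: the "world" is an unknown linear map that the model gradually predicts.
rng = np.random.default_rng(1)
true_map = 0.3 * rng.standard_normal((4, 6))
model = LinearForwardModel(state_dim=4, action_dim=2)
for step in range(2000):
    state, action = rng.standard_normal(4), rng.standard_normal(2)
    next_state = true_map @ np.concatenate([state, action])
    err = model.update(state, action, next_state)
print("final prediction error:", round(err, 4))
```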

  16. Improving Cognitive Skills of the Industrial Robot

    NASA Astrophysics Data System (ADS)

    Bezák, Pavol

    2015-08-01

    At present, many industrial robots are programmed to do the same repetitive task all the time. Industrial robots doing this kind of job are not able to judge whether an action is correct, effective, or good. Object detection, manipulation, and grasping are challenging due to hand and object modeling uncertainties, unknown contact types, and object stiffness properties. In this paper, an intelligent humanoid-hand object detection and grasping model is proposed, assuming that the object properties are known. The control is simulated in MATLAB Simulink/SimMechanics, the Neural Network Toolbox, and the Computer Vision System Toolbox.

  17. Project M: An Assessment of Mission Assumptions

    NASA Technical Reports Server (NTRS)

    Edwards, Alycia

    2010-01-01

    Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1,000 days. The robot will travel in a lander fueled by liquid oxygen and liquid methane and land on the moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission's feasibility: the maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

  18. Small-Group Technology-Assisted Instruction: Virtual Teacher and Robot Peer for Individuals with Autism Spectrum Disorder.

    PubMed

    Saadatzi, Mohammad Nasser; Pennington, Robert C; Welch, Karla C; Graham, James H

    2018-06-20

    The authors combined virtual reality technology and social robotics to develop a tutoring system that resembled a small-group arrangement. This tutoring system featured a virtual teacher instructing sight words, and included a humanoid robot emulating a peer. The authors used a multiple-probe design across word sets to evaluate the effects of the instructional package on the explicit acquisition and vicarious learning of sight words instructed to three children with autism spectrum disorder (ASD) and the robot peer. Results indicated that participants acquired, maintained, and generalized 100% of the words explicitly instructed to them, made fewer errors while learning the words common between them and the robot peer, and vicariously learned 94% of the words solely instructed to the robot.

  19. FE Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013708 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  20. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013710 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  1. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013714 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  2. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013712 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  3. KSC-2010-4382

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a robotics engineer animates the dexterous humanoid astronaut helper, Robonaut (R2) for the participants at a media event hosted by NASA. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  4. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031125 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  5. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031124 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  6. Robonaut 2 in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-02

    ISS034-E-013990 (2 Jan. 2013) --- In the International Space Station’s Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  7. Generalisation, decision making, and embodiment effects in mental rotation: A neurorobotic architecture tested with a humanoid robot.

    PubMed

    Seepanomwan, Kristsana; Caligiore, Daniele; Cangelosi, Angelo; Baldassarre, Gianluca

    2015-12-01

    Mental rotation, a classic experimental paradigm of cognitive psychology, tests the capacity of humans to mentally rotate a seen object to decide if it matches a target object. In recent years, mental rotation has been investigated with brain imaging techniques to identify the brain areas involved. Mental rotation has also been investigated through the development of neural-network models, used to identify the specific mechanisms that underlie its process, and with neurorobotics models to investigate its embodied nature. Current models, however, have limited capacities to relate to neuroscientific evidence, to generalise mental rotation to new objects, to suitably represent decision making mechanisms, and to allow the study of the effects of overt gestures on mental rotation. The work presented in this study overcomes these limitations by proposing a novel neurorobotic model that has a macro-architecture constrained by knowledge of the brain, encompasses a rather general mental rotation mechanism, and incorporates a biologically plausible decision making mechanism. The model was tested using the humanoid robot iCub in tasks requiring the robot to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot gained an enhanced capacity to generalise mental rotation to new objects and to express the possible effects of overt movements of the wrist on mental rotation. The model also represents a further step in the identification of the embodied neural mechanisms that may underlie mental rotation in humans and might also give hints to enhance robots' planning capabilities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study.

    PubMed

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna; Andrade, Jackie

    2018-05-03

    Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. ©Joana Galvão Gomes da Silva, David J Kavanagh, Tony Belpaeme, Lloyd Taylor, Konna Beeson, Jackie Andrade. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.05.2018.

  9. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot.

    PubMed

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating hypothetical human-derived control systems in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring a full body control. Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes including frequency response functions (FRFs), as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward extending the robot's control to more complex sensorimotor functions, such as walking, in the future.

  10. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot

    PubMed Central

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating hypothetical human-derived control systems in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring a full body control. Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes including frequency response functions (FRFs), as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward extending the robot's control to more complex sensorimotor functions, such as walking, in the future. PMID:28951719

  11. Interactive language learning by robots: the transition from babbling to word forms.

    PubMed

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
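
    As a hedged sketch of a "simple, real-time, frequency dependent mechanism" (not the system used in the study), the code below accumulates syllable frequencies from the perceived stream and produces the most frequent form once it crosses a threshold; the syllable segmentation is assumed to be given.

```python
from collections import Counter

def update_and_babble(counts, heard_syllables, threshold=3):
    """Accumulate syllable frequencies and return a word form to produce, if any.

    `heard_syllables` is the (assumed already segmented) syllable sequence perceived
    during the latest interaction turn. Consistently pronounced, salient syllables
    accumulate counts fastest and are the first to be produced.
    """
    counts.update(heard_syllables)
    syllable, freq = counts.most_common(1)[0]
    return syllable if freq >= threshold else None   # otherwise keep babbling randomly

counts = Counter()
for turn in [["look", "at", "that", "ball"], ["a", "red", "ball"], ["your", "ball", "again"]]:
    produced = update_and_babble(counts, turn)
print("word form produced:", produced)   # -> "ball" after repeated exposure
```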

  12. Design and control of five fingered under-actuated robotic hand

    NASA Astrophysics Data System (ADS)

    Sahoo, Biswojit; Parida, Pramod Kumar

    2018-04-01

    Nowadays, research regarding humanoid robots and their applications in different fields (industry, household, rehabilitation, and exploration) is going on across the globe. A challenging topic within it is the design of a dexterous robotic hand that not only can perform as the hand of a robot but also can be used in rehabilitation. The basic concern is a dexterous robot hand able to mimic the function of the biological hand in performing different operations. This work concerns the design and control of an under-actuated robotic hand consisting of four under-actuated fingers (index, middle, little, and ring fingers), a thumb, and a dexterous palm, which can copy the motions and grasp types of the human hand using 21 degrees of freedom instead of 25.

  13. What makes a robot 'social'?

    PubMed

    Jones, Raya A

    2017-08-01

    Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to the 'body' with particular reference to Gibsonian affordance theory so as to identify the level of analysis at which dialogicality enters social interactions.

  14. Utilization of the NASA Robonaut as a Surgical Avatar in Telemedicine

    NASA Technical Reports Server (NTRS)

    Dean, Marc; Diftler, Myron

    2015-01-01

    The concept of teleoperated robotic surgery is not new; however, most of the work to date has utilized specialized robots designed for a specific set of surgeries. This activity explores the use of a humanoid robot to perform surgical procedures using the same hand-held instruments that a human surgeon employs. For this effort, the tele-operated Robonaut (R2) was selected due to its dexterity, its ability to perform a wide range of tasks, and its adaptability to changing environments. To evaluate this concept, a series of challenges was designed with the goal of assessing the feasibility of utilizing Robonaut as a telemedicine-based surgical avatar.

  15. Development of autonomous eating mechanism for biomimetic robots

    NASA Astrophysics Data System (ADS)

    Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung

    2005-12-01

    Most recently developed robots are human-friendly robots that imitate animals or humans, such as entertainment robots, biomimetic robots, and humanoid robots. Interest in these robots is increasing because social trends are focused on health, welfare, and the graying of society. Autonomous eating is a unique and inherent behavior of pets and animals. Most entertainment and pet robots use internal batteries, and robots with internal batteries are not able to operate while the battery is charging. Therefore, if a robot can autonomously eat batteries as its feed, it is not only able to keep operating while recharging but also becomes more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat it, and evacuate it by itself. This work describes the sub-parts of the developed mechanism, namely the head, mouth, and stomach parts. In addition, the control system of the autonomous eating mechanism is described.

  16. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by its multiple-timescale dynamics was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning at the lower feature-perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization at its higher level, characterized by slow timescale dynamics, and (3) the MTRNN can develop a further cognitive capability for controlling its internal contextual processes as situated in ongoing task sequences, without being provided with cues explicitly indicating task segmentation points. Analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, exploiting the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.
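
    The multiple-timescale property can be illustrated with the standard leaky-integrator update in which each unit has its own time constant. The sketch below is a generic continuous-time RNN step (not the trained network from the study): small time constants give fast feature units, large ones give slow context units that integrate over longer horizons.

```python
import numpy as np

def mtrnn_step(u, x, W_rec, W_in, tau):
    """One leaky-integrator update: u <- (1 - 1/tau)*u + (1/tau)*(W_rec@tanh(u) + W_in@x).

    `tau` holds per-unit time constants: small values give fast feature units,
    large values give slow context units.
    """
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W_rec @ np.tanh(u) + W_in @ x)

rng = np.random.default_rng(0)
n_fast, n_slow, n_in = 8, 4, 3
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 50.0)])  # fast vs slow units
W_rec = 0.1 * rng.standard_normal((n, n))
W_in = 0.1 * rng.standard_normal((n, n_in))

u = np.zeros(n)
for t in range(100):
    x = np.array([np.sin(0.2 * t), np.cos(0.2 * t), 1.0])  # toy input sequence
    u = mtrnn_step(u, x, W_rec, W_in, tau)
print("fast unit activity:", np.round(np.tanh(u[:3]), 3))
print("slow unit activity:", np.round(np.tanh(u[-3:]), 3))
```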

  17. Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

    PubMed Central

    Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan

    2012-01-01

    Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853

  18. Empowering Student Voice through Interactive Design and Digital Making

    ERIC Educational Resources Information Center

    Kim, Yanghee; Searle, Kristin

    2017-01-01

    Over the last two decades online technology and digital media have provided space for students to participate and express their voices. This paper further explores how new digital technologies, such as humanoid robots and wearable electronics, can be used to offer additional spaces where students' voices are heard. In these spaces, young students…

  19. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and are used as inputs to the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) algorithm, is proposed; it integrates a type-2 fuzzy criterion into the iterative clustering optimization to improve the robustness and reliability of the clustering results. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the capacity of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
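
    As a hedged illustration of combining several feature-specific kernels (not the MFMK-SVM formulation itself), the sketch below builds one RBF Gram matrix per feature set, sums them with fixed weights, and feeds the combined matrix to a precomputed-kernel SVM; the feature names, weights, and kernel parameters are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feature_blocks, weights, gammas):
    """Weighted sum of per-feature RBF Gram matrices (fixed weights, not learned)."""
    n = feature_blocks[0].shape[0]
    K = np.zeros((n, n))
    for X, w, g in zip(feature_blocks, weights, gammas):
        K += w * rbf_kernel(X, X, gamma=g)
    return K

# Toy data: pretend these are intensity, gradient, and texture features per region.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
intensity = rng.standard_normal((100, 3)) + y[:, None]
gradient = rng.standard_normal((100, 5)) + 0.5 * y[:, None]
texture = rng.standard_normal((100, 8))

K = combined_kernel([intensity, gradient, texture],
                    weights=[0.5, 0.3, 0.2], gammas=[0.5, 0.2, 0.1])
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```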

  20. Interacting With Robots to Investigate the Bases of Social Interaction.

    PubMed

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability for interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions, and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for understanding and anticipating the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots can indeed establish an interaction with a human partner, contingently reacting to their actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction. We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.

  1. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
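
    To make the three-tier structure easier to picture, here is a purely schematic sketch (not the patented implementation) of embedded joint-level controllers, a coordinating controller that centralizes their feedback, and a task-level controller issuing the high-level command.

```python
class JointController:
    """First tier: embedded controller for one compliant joint."""
    def __init__(self, name):
        self.name, self.position = name, 0.0

    def command(self, target):
        self.position = target              # placeholder for the low-level servo loop

    def feedback(self):
        return {self.name: self.position}   # sensor data measured at the control point

class CoordinatingController:
    """Second tier: coordinates the embedded controllers and centralizes feedback."""
    def __init__(self, joints):
        self.joints = joints

    def execute(self, joint_targets):
        state = {}
        for joint, target in zip(self.joints, joint_targets):
            joint.command(target)
            state.update(joint.feedback())  # one place to read all feedback data
        return state

class TaskController:
    """Third tier: issues the high-level command for an autonomous task."""
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def perform(self, task_targets):
        return self.coordinator.execute(task_targets)

arm = [JointController(f"joint_{i}") for i in range(3)]
system = TaskController(CoordinatingController(arm))
print(system.perform([0.1, -0.4, 0.8]))
```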

  2. Measuring empathy for human and robot hand pain using electroencephalography.

    PubMed

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations, such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or a robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although the ANOVA interaction was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to how we empathize with other humans. However, the beginning of the top-down process of empathy is weaker for robots than for humans.

  3. Improving Grasp Skills Using Schema Structured Learning

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Grupen, ROderic A.; Fagg, Andrew H.

    2006-01-01

    In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.

  4. Application of ultrasonic sensor for measuring distances in robotics

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects that is an alternative to technical vision. Humanoid robots, like robots of other types, are first of all equipped with sensory systems similar to the human senses. However, this approach alone is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals and creatures (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder driven by the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is also provided.
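
    The HC-SR04 reports range through the width of its echo pulse; the helper below is a host-side sketch (not the STM32 subroutine discussed in the record) that converts a measured pulse width into distance using the speed of sound, with the trigger/echo timing assumed to be handled by the microcontroller.

```python
SPEED_OF_SOUND_M_S = 343.0   # at roughly 20 degrees C; varies with temperature

def pulse_width_to_distance_m(echo_pulse_s):
    """Convert an HC-SR04 echo pulse width (seconds) to one-way distance (metres).

    The sensor emits an 8-cycle 40 kHz burst after a ~10 microsecond trigger pulse
    and holds the echo line high for the sound's round-trip time, so the one-way
    distance is half the travelled path.
    """
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2.0

# Example: a 1.16 ms echo pulse corresponds to roughly 0.20 m.
print(round(pulse_width_to_distance_m(1.16e-3), 3))
```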

  5. We're in This Together: Intentional Design of Social Relationships with AIED Systems

    ERIC Educational Resources Information Center

    Walker, Erin; Ogan, Amy

    2016-01-01

    Students' relationships with their peers, teachers, and communities influence the ways in which they approach learning activities and the degree to which they benefit from them. Learning technologies, ranging from humanoid robots to text-based prompts on a computer screen, have a similar social influence on students. We envision a future in which…

  6. Pedagogical and Technological Augmentation of Mobile Learning for Young Children Interactive Learning Environments

    ERIC Educational Resources Information Center

    Kim, Yanghee; Smith, Diantha

    2017-01-01

    The ubiquity and educational potential of mobile applications are well acknowledged. This paper proposes six theory-based, pedagogical strategies to guide interaction design of mobile apps for young children. Also, to augment the capabilities of mobile devices, we used a humanoid robot integrated with a smartphone and developed an English-learning…

  7. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  8. Curiosity driven reinforcement learning for motion planning on humanoids

    PubMed Central

    Frank, Mikhail; Leitner, Jürgen; Stollenga, Marijn; Förster, Alexander; Schmidhuber, Jürgen

    2014-01-01

    Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience, controlling the actual iCub hardware in real-time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment. PMID:24432001
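
    For orientation, the information-gain criterion at the core of such curious agents can be sketched in a tabular toy form: maintain Dirichlet pseudo-counts over state transitions and pick the action whose expected outcome would change the learned world model the most (expected KL divergence between updated and current transition estimates). This is a generic illustration under assumed toy dynamics, not the iCub framework or its constraint-respecting controller.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 3

# Dirichlet pseudo-counts over next-state distributions, one per (state, action).
counts = np.ones((N_STATES, N_ACTIONS, N_STATES))

def info_gain(s, a):
    """Expected KL divergence between the updated and current transition model."""
    alpha = counts[s, a]
    p = alpha / alpha.sum()
    gain = 0.0
    for s_next in range(N_STATES):
        alpha_new = alpha.copy()
        alpha_new[s_next] += 1.0
        q = alpha_new / alpha_new.sum()
        gain += p[s_next] * np.sum(q * np.log(q / p))
    return gain

def true_step(s, a):
    """Stand-in environment dynamics (unknown to the agent)."""
    return (s + a + 1) % N_STATES

s = 0
for t in range(200):
    a = max(range(N_ACTIONS), key=lambda a_: info_gain(s, a_))  # curious choice
    s_next = true_step(s, a)
    counts[s, a, s_next] += 1.0      # learn the world model from experience
    s = s_next

print(np.round(counts[0] / counts[0].sum(axis=-1, keepdims=True), 2))
```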

  9. Creating the brain and interacting with the brain: an integrated approach to understanding the brain.

    PubMed

    Morimoto, Jun; Kawato, Mitsuo

    2015-03-06

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  10. Creating the brain and interacting with the brain: an integrated approach to understanding the brain

    PubMed Central

    Morimoto, Jun; Kawato, Mitsuo

    2015-01-01

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568

  11. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  12. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's eyes stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  13. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012573 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  14. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012571 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  15. Six-and-a-Half-Month-Old Children Positively Attribute Goals to Human Action and to Humanoid-Robot Motion

    ERIC Educational Resources Information Center

    Kamewari, K.; Kato, M.; Kanda, T.; Ishiguro, H.; Hiraki, K.

    2005-01-01

    Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation…

  16. Robonaut: A Robotic Astronaut Assistant

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Diftler, Myron A.

    2001-01-01

    NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two seven-DOF arms, giving it an impressive workspace for interacting with its environment. Its two five-fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.

  17. A survey on dielectric elastomer actuators for soft robots.

    PubMed

    Gu, Guo-Ying; Zhu, Jian; Zhu, Li-Min; Zhu, Xiangyang

    2017-01-23

    Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: (i) dielectric elastomers are soft, motion-generating materials that resemble natural human muscle in terms of force, strain (displacement per unit length or area) and actuation pressure/density; and (ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing the working principle, key components and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further studies in terms of mechanism design, dynamics modeling and autonomous control.
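
    As a pointer to the electromechanical modeling the survey refers to, a commonly used first-order relation for a DEA film is the Maxwell stress p = eps0 * eps_r * (V/t)^2: the effective actuation pressure grows with the square of the electric field across the membrane. The sketch below simply evaluates that expression for assumed, illustrative parameter values.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def maxwell_pressure(voltage_v: float, thickness_m: float, eps_r: float) -> float:
    """Equivalent electrostatic (Maxwell) pressure on a DEA membrane, in Pa.

    p = eps0 * eps_r * E^2, with E = V / t the electric field across the film.
    """
    e_field = voltage_v / thickness_m
    return EPS0 * eps_r * e_field ** 2

if __name__ == "__main__":
    # Assumed, illustrative values: 3 kV across a 100-micrometre film, eps_r ~ 4.7.
    p = maxwell_pressure(3000.0, 100e-6, 4.7)
    print(f"Maxwell pressure = {p / 1000:.1f} kPa")
```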

  18. Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals

    PubMed Central

    Mick, Sébastien; Cattaert, Daniel; Paclet, Florent; Oudeyer, Pierre-Yves; de Rugy, Aymar

    2017-01-01

    Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, by assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector, whose magnitude is linearly or quadratically related to force input magnitude. With the robotic arm from the open source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3-D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of their global usability in the context of the task. According to performance metrics and questionnaire results, velocity control modes were found to perform better than position control mode in terms of accuracy and quality of control as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even if these two modes did not clearly distinguish from one another when it comes to performance and usability assessment. These results highlight the need to take into account user experience as one of the key criteria for the design of control modes intended to operate limb prostheses. PMID:29118699
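
    The three mappings compared in the study, force to position, force to velocity (linear), and force to velocity (quadratic), can be summarized as in the sketch below; the gains, the 50 Hz period, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def position_mode(force, k_p=0.01):
    """Force vector mapped directly to an end-effector position offset."""
    return k_p * force

def velocity_mode_linear(force, k_v=0.05):
    """Velocity proportional to force magnitude, along the force direction."""
    return k_v * force

def velocity_mode_quadratic(force, k_v=0.02):
    """Velocity magnitude proportional to the squared force magnitude."""
    norm = np.linalg.norm(force)
    if norm == 0.0:
        return np.zeros_like(force)
    return k_v * norm ** 2 * (force / norm)

force = np.array([2.0, 0.0, 1.0])     # N, from an isometric force transducer
dt = 0.02                             # assumed 50 Hz control period
position = np.zeros(3)

# In velocity modes the command is integrated over time; in position mode it is not.
position += velocity_mode_quadratic(force) * dt
print(position_mode(force), velocity_mode_linear(force), position)
```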

  19. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master–Slave Systems

    PubMed Central

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance by permitting the end-effector to be flexibly interchanged according to the task at hand. This requires the development of control policies that not only cope with asymmetric master–slave systems but whose high-level components are also designed in a unified space, abstracted from device specifics. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system through generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips' Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user’s wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics. Finally, the proposed control framework offers a universal, flexible, and intuitive interface allowing for the performance of effective tele-manipulations. PMID:28421179

  20. Synergy-Based Bilateral Port: A Universal Control Module for Tele-Manipulation Frameworks Using Asymmetric Master-Slave Systems.

    PubMed

    Brygo, Anais; Sarakoglou, Ioannis; Grioli, Giorgio; Tsagarakis, Nikos

    2017-01-01

    Endowing tele-manipulation frameworks with the capability to accommodate a variety of robotic hands is key to achieving high performance by permitting the end-effector to be flexibly interchanged according to the task at hand. This requires the development of control policies that not only cope with asymmetric master-slave systems but whose high-level components are also designed in a unified space, abstracted from device specifics. To address this dual challenge, a novel synergy port is developed that resolves the kinematic, sensing, and actuation asymmetries of the considered system through generating motion and force feedback references in the hardware-independent hand postural synergy space. It builds upon the concept of the Cartesian-based synergy matrix, which is introduced as a tool mapping the fingertips' Cartesian space to the directions oriented along the grasp principal components. To assess the effectiveness of the proposed approach, the synergy port has been integrated into the control system of a highly asymmetric tele-manipulation framework, in which the 3-finger hand exoskeleton HEXOTRAC is used as a master device to control the SoftHand, a robotic hand whose transmission system relies on a single motor to drive all joints along a soft synergistic path. The platform is further enriched with the vision-based motion capture system Optitrack to monitor the 6D trajectory of the user's wrist, which is used to control the robotic arm on which the SoftHand is mounted. Experiments have been conducted with the humanoid robot COMAN and the KUKA LWR robotic manipulator. Results indicate that this bilateral interface is highly intuitive and allows users with no prior experience to reach, grasp, and transport a variety of objects exhibiting very different shapes and impedances. In addition, the hardware and control solutions proved capable of accommodating users with different hand kinematics. Finally, the proposed control framework offers a universal, flexible, and intuitive interface allowing for the performance of effective tele-manipulations.
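
    The Cartesian-based synergy matrix idea (principal components of recorded fingertip postures used as a hardware-independent reference space) can be sketched as follows with synthetic data; the dimensions, the PCA-by-SVD construction, and the two retained synergies are assumptions for illustration, not the published tool.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "grasp dataset": 200 recorded hand postures, each a stacked vector
# of 5 fingertip positions in 3-D (15 Cartesian coordinates).
postures = rng.normal(size=(200, 15)) @ rng.normal(size=(15, 15))
mean = postures.mean(axis=0)

# Principal components of the fingertip Cartesian space = synergy directions.
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
n_synergies = 2
S = vt[:n_synergies]                 # (2, 15) Cartesian-based synergy matrix

def to_synergy_space(fingertips):
    """Project a master-side fingertip configuration onto the synergy space."""
    return S @ (fingertips - mean)

def from_synergy_space(sigma):
    """Reconstruct a full fingertip reference from synergy coordinates."""
    return mean + S.T @ sigma

x = postures[0]
sigma = to_synergy_space(x)          # hardware-independent hand reference
print("synergy coords:", np.round(sigma, 2))
print("reconstruction error:",
      np.round(np.linalg.norm(x - from_synergy_space(sigma)), 3))
```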

  1. Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji

    2012-01-01

    This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…

  2. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task while his avatar copies his gestures, which are captured by the Kinect 3D vision system. The information about the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment that uses the same base elements, four game routines (Touch the balls 1 and 2, Simon says, and Follow the point) are used for rehabilitation. These environments are designed to create a positive influence on the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
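
    A recurring low-level step in Kinect-based rehabilitation systems of this kind is turning tracked 3-D joint positions into angles that the avatar and the humanoid can reproduce. The helper below computes an elbow angle from shoulder, elbow, and wrist keypoints; it is a generic geometric sketch, not the authors' pipeline or the NAO API.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle (radians) at the elbow formed by the upper-arm and forearm vectors."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)
    fore = np.asarray(wrist, float) - np.asarray(elbow, float)
    cosang = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Illustrative keypoints (metres) as a depth camera might report them.
print(np.degrees(elbow_angle([0.0, 0.3, 1.0], [0.0, 0.0, 1.0], [0.25, 0.0, 1.0])))
```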

  3. Acquisition of Robotic Giant-swing Motion Using Reinforcement Learning and Its Consideration of Motion Forms

    NASA Astrophysics Data System (ADS)

    Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro

    This paper shows how a compact humanoid robot can acquire a giant-swing motion without any robotic model by using the Q-Learning method. It is generally held that Q-Learning is not appropriate for learning dynamic motions, because the Markov property is not necessarily guaranteed during a dynamic task. However, we address this problem by embedding the angular velocity in the state definition and by averaging the Q-Learning updates to reduce dynamic effects, although some non-Markov effects remain in the learning results. The results show how the robot can acquire a giant-swing motion using the Q-Learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the results show how this method can avoid the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
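
    A minimal tabular sketch of the kind of Q-learning update discussed above, with angular velocity included in the (discretized) state so the learner can distinguish swing phases. The pendulum-like dynamics, gains, and reward are illustrative stand-ins; the paper's averaging variant and the real robot are not reproduced.

```python
import math
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
ACTIONS = [-1.0, 0.0, 1.0]            # body-swing torque directions
Q = defaultdict(float)

def discretize(theta, omega):
    """State includes both angle and angular velocity, as in the paper."""
    return (int(theta // 0.3), int(omega // 0.5))

def step(theta, omega, a, dt=0.05):
    """Toy pendulum-on-a-bar dynamics standing in for the humanoid."""
    omega += (-9.8 * math.sin(theta) + 3.0 * a) * dt
    theta = (theta + omega * dt + math.pi) % (2 * math.pi) - math.pi
    return theta, omega

for episode in range(500):
    theta, omega = 0.0, 0.0           # hanging straight down from the bar
    for t in range(200):
        s = discretize(theta, omega)
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda u: Q[(s, u)])
        theta, omega = step(theta, omega, a)
        r = -math.cos(theta)          # reward is highest when inverted over the bar
        s2 = discretize(theta, omega)
        target = r + GAMMA * max(Q[(s2, u)] for u in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

print("learned", len(Q), "state-action values")
```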

  4. Archaic man meets a marvellous automaton: posthumanism, social robots, archetypes.

    PubMed

    Jones, Raya

    2017-06-01

    Posthumanism is associated with critical explorations of how new technologies are rewriting our understanding of what it means to be human and how they might alter human existence itself. Intersections with analytical psychology vary depending on which technologies are held in focus. Social robotics promises to populate everyday settings with entities that have populated the imagination for millennia. A legend of A Marvellous Automaton appears as early as 350 B.C. in a book of Taoist teachings, and is joined by ancient and medieval legends of manmade humanoids coming to life, as well as the familiar robots of modern science fiction. However, while the robotics industry seems to be realizing an archetypal fantasy, the technology creates new social realities that generate distinctive issues of potential relevance for the theory and practice of analytical psychology. © 2017, The Society of Analytical Psychology.

  5. Motion Recognition and Modifying Motion Generation for Imitation Robot Based on Motion Knowledge Formation

    NASA Astrophysics Data System (ADS)

    Okuzawa, Yuki; Kato, Shohei; Kanoh, Masayoshi; Itoh, Hidenori

    A knowledge-based approach to imitation learning of motion generation for humanoid robots, together with an imitative motion generation system based on motion knowledge learning and modification, is described. The system has three parts: recognition, learning, and modification. The recognition part matches an instructed motion against the motion knowledge database using a continuous hidden Markov model. When the motion is recognized as unfamiliar, the learning part learns it using locally weighted regression and acquires knowledge of the motion. When the robot recognizes the instructed motion as familiar, or judges that its acquired knowledge is applicable to generating it, the modification part imitates the instructed motion by modifying a learned motion. The paper reports performance results on the imitation of several radio gymnastics motions.
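
    The learning part relies on locally weighted regression; the sketch below is the standard textbook form of LWR (Gaussian weights around the query point followed by a weighted linear fit), applied to a toy joint-angle trajectory. The features and bandwidth of the actual system are not reproduced.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.2):
    """Locally weighted linear regression prediction at x_query.

    Weights w_i = exp(-(x_i - x_query)^2 / (2 tau^2)); the prediction comes
    from a weighted least-squares linear fit around the query point.
    """
    X = np.asarray(X, float).reshape(-1, 1)
    w = np.exp(-((X[:, 0] - x_query) ** 2) / (2 * tau ** 2))
    A = np.hstack([np.ones_like(X), X])            # bias + linear term
    sw = np.sqrt(w)[:, None]
    theta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * np.asarray(y, float), rcond=None)
    return theta[0] + theta[1] * x_query

# Toy trajectory samples: joint angle as a function of normalized time.
t = np.linspace(0.0, 1.0, 50)
angle = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(round(lwr_predict(0.25, t, angle), 3))       # should be near sin(pi/2) = 1
```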

  6. A developmental roadmap for learning by imitation in robots.

    PubMed

    Lopes, Manuel; Santos-Victor, José

    2007-04-01

    In this paper, we present a strategy whereby a robot acquires the capability to learn by imitation following a developmental pathway consisting of three levels: 1) sensory-motor coordination; 2) world interaction; and 3) imitation. With these stages, the system is able to learn tasks by imitating human demonstrators. We describe results of the different developmental stages, involving perceptual and motor skills, implemented in our humanoid robot, Baltazar. At each stage, the system's attention is drawn toward different entities: its own body and, later on, objects and people. Our main contributions are the general architecture and the implementation of all the necessary modules until imitation capabilities are eventually acquired by the robot. Also, several other contributions are made at each level: learning of sensory-motor maps for redundant robots, a novel method for learning how to grasp objects, and a framework for learning task description from observation for program-level imitation. Finally, vision is used extensively as the sole sensing modality (sometimes in a simplified setting) avoiding the need for special data-acquisition hardware.

  7. Assisted Perception, Planning and Control for Remote Mobility and Dexterous Manipulation

    DTIC Science & Technology

    2017-04-01

    on unmanned aerial vehicles (UAVs). The underlying algorithm is based on an Extended Kalman Filter (EKF) that simultaneously estimates robot state...and sensor biases. The filter developed provided a probabilistic fusion of sensor data from many modalities to produce a single consistent position...estimation for a walking humanoid. Given a prior map using a Gaussian particle filter, the LIDAR-based system is able to provide a drift-free
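
    The fragments above refer to an EKF that estimates robot state and sensor biases simultaneously; the standard pattern is to augment the state vector with the bias and let the filter estimate both. The sketch below shows that pattern for a 1-D position/velocity/accelerometer-bias example with an exteroceptive position fix; it is a generic illustration, not the reported filter, and because this toy model is linear the predict/update steps reduce to the plain Kalman equations.

```python
import numpy as np

dt = 0.01
# State: [position, velocity, accelerometer bias]; the bias is modeled as a
# slowly varying random walk and estimated alongside the kinematic state.
x = np.zeros(3)
P = np.eye(3)
F = np.array([[1.0, dt, -0.5 * dt * dt],
              [0.0, 1.0, -dt],          # measured accel = true accel + bias
              [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt * dt, dt, 0.0])  # input matrix for the measured accel
Q = np.diag([1e-6, 1e-4, 1e-8])         # process noise, including bias drift
H = np.array([[1.0, 0.0, 0.0]])         # position fix (e.g., from LIDAR matching)
R = np.array([[1e-3]])

def filter_step(x, P, accel_meas, z_pos):
    # Predict: propagate with the (bias-corrupted) IMU measurement as input.
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q
    # Update with the exteroceptive position measurement.
    y = z_pos - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = filter_step(x, P, accel_meas=0.2, z_pos=np.array([0.001]))
print(np.round(x, 5))
```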

  8. Foundations for a Theory of Mind for a Humanoid Robot

    DTIC Science & Technology

    2001-05-01

    visual processing software and our understanding of how people interact with Lazlo. Thanks also to Jessica Banks, Charlie Kemp, and Juan Velasquez for...capacities of infants (e.g., Carey, 1999; Gelman, 1990). Furthermore, research on pervasive developmental disorders such as autism has focused on the se...Keil, 1995; Carey, 1995; Gelman et al., 1983). While the discrimination of animate from inanimate certainly relies upon many distinct properties

  9. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences

    PubMed Central

    2016-01-01

    Emerging studies indicate that several species such as corvids, apes and children solve ‘The Crow and the Pitcher’ task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause–effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended ‘learning–prediction–abstraction’ loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. PMID:27466440

  10. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences.

    PubMed

    Bhat, Ajaz Ahmad; Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2016-07-01

    Emerging studies indicate that several species such as corvids, apes and children solve 'The Crow and the Pitcher' task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause-effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended 'learning-prediction-abstraction' loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. © 2016 The Author(s).

  11. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many robot specifications have been improved and enriched by technological advances; among them are humanoid robots whose facial expressions come closer to natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile strength tests on silicone rubber used as artificial skin. Facial expressions were computed by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expressive face approximates human facial expression, with a muscle structure that follows human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted to follow human expressions. The tensile tests check the allowable force of the artificial skin that can be applied in future robot facial expressions. Combining the computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.

  12. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  13. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture.

    PubMed

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2013-01-01

    The identification of learning mechanisms for locomotion has been the subject of much research for some time but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability which emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-square-temporal-difference based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value.

  14. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture

    PubMed Central

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2013-01-01

    The identification of learning mechanisms for locomotion has been the subject of much research for some time but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability which emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-square-temporal-difference based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value. PMID:23675345
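
    For context on the CPG component, the sketch below implements two mutually coupled phase oscillators whose outputs could serve as rhythmic left/right hip set-points; the coupling law, gains, and amplitude are assumed, and neither the paper's CPG architecture nor its actor-critic learning is reproduced.

```python
import numpy as np

# Two coupled phase oscillators (left/right leg), driven toward anti-phase.
omega = 2 * np.pi * 0.5           # intrinsic stepping frequency, 0.5 Hz (assumed)
k = 2.0                           # coupling gain (assumed)
phi = np.array([0.0, 0.1])        # slightly perturbed initial phases
dt = 0.01
amplitude = 0.3                   # rad, hip swing amplitude (assumed)

history = []
for t in range(2000):
    # Each oscillator is pulled toward being pi out of phase with the other.
    dphi = np.array([
        omega + k * np.sin(phi[1] - phi[0] - np.pi),
        omega + k * np.sin(phi[0] - phi[1] - np.pi),
    ])
    phi = phi + dphi * dt
    history.append(amplitude * np.sin(phi))   # joint set-points for the two hips

left, right = history[-1]
print(f"steady-state hip targets: left={left:+.2f} rad, right={right:+.2f} rad")
```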

  15. Robotics and artificial intelligence: Jewish ethical perspectives.

    PubMed

    Rappaport, Z H

    2006-01-01

    In 16th Century Prague, Rabbi Loew created a Golem, a humanoid made of clay, to protect his community. When the Golem became too dangerous to his surroundings, he was dismantled. This Jewish theme illustrates some of the guiding principles in its approach to the moral dilemmas inherent in future technologies, such as artificial intelligence and robotics. Man is viewed as having received the power to improve upon creation and develop technologies to achieve them, with the proviso that appropriate safeguards are taken. Ethically, not-harming is viewed as taking precedence over promoting good. Jewish ethical thinking approaches these novel technological possibilities with a cautious optimism that mankind will derive their benefits without coming to harm.

  16. Space Shuttle Discovery is Prepared for Launch

    NASA Image and Video Library

    2011-02-23

    The space shuttle Discovery is seen shortly after the Rotating Service Structure was rolled back at launch pad 39A, at the Kennedy Space Center in Cape Canaveral, Florida, on Wednesday, Feb. 23, 2011. Discovery, on its 39th and final flight, will carry the Italian-built Permanent Multipurpose Module (PMM), Express Logistics Carrier 4 (ELC4) and Robonaut 2, the first humanoid robot in space to the International Space Station. Photo Credit: (NASA/Bill Ingalls)

  17. Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift

    DTIC Science & Technology

    2004-05-01

    system. We propose a real-time algorithm to infer depth and build 3-dimensional coarse maps for objects through the analysis of cues provided by an... system is well defined at the boundary of these regions (although the derivatives are not). A time domain analysis is presented for a piece-linear...

  18. Control strategies for robots in contact

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung

    In the field of robotics, there is a growing need to provide robots with the ability to interact with complex and unstructured environments. Operations in such environments pose significant challenges in terms of sensing, planning, and control. In particular, it is critical to design control algorithms that account for the dynamics of the robot and environment at multiple contacts. The work in this thesis focuses on the development of a control framework that addresses these issues. The approaches are based on the operational space control framework and estimation methods. By accounting for the dynamics of the robot and environment, modular and systematic methods are developed for robots interacting with the environment at multiple locations. The proposed force control approach demonstrates high performance in the presence of uncertainties. Building on this basic capability, new control algorithms have been developed for haptic teleoperation, multi-contact interaction with the environment, and whole body motion of non-fixed based robots. These control strategies have been experimentally validated through simulations and implementations on physical robots. The results demonstrate the effectiveness of the new control structure and its robustness to uncertainties. The contact control strategies presented in this thesis are expected to contribute to the needs in advanced controller design for humanoid and other complex robots interacting with their environments.
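
    As a pointer to the operational space approach mentioned above, the sketch below shows its most familiar ingredient for a fixed-base planar arm: a task-space PD force mapped to joint torques through the Jacobian transpose, plus gravity compensation. Link parameters and gains are assumed; the full formulation (task-space inertia, multi-contact and whole-body extensions treated in the thesis) is not reproduced.

```python
import numpy as np

L1 = L2 = 0.4          # link lengths (m), assumed
M1 = M2 = 1.0          # point masses at link ends (kg), assumed
G = 9.81

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def gravity_torque(q):
    # Gravity torques for point masses placed at the two link tips.
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    g1 = (M1 + M2) * G * L1 * c1 + M2 * G * L2 * c12
    g2 = M2 * G * L2 * c12
    return np.array([g1, g2])

def task_space_torque(q, dq, x_des, kp=100.0, kd=20.0):
    """tau = J^T F + g(q), with F a PD force toward the desired task position."""
    J = jacobian(q)
    x = forward_kinematics(q)
    dx = J @ dq
    F = kp * (x_des - x) - kd * dx
    return J.T @ F + gravity_torque(q)

q, dq = np.array([0.3, 0.5]), np.zeros(2)
print(np.round(task_space_torque(q, dq, x_des=np.array([0.5, 0.3])), 2))
```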

  19. Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance.

    PubMed

    Zihni, Ahmed; Gerull, William D; Cavallo, Jaime A; Ge, Tianjia; Ray, Shuddhadeb; Chiu, Jason; Brunt, L Michael; Awad, Michael M

    2018-03-01

    Robotic platforms have the potential advantage of providing additional dexterity and precision to surgeons while performing complex laparoscopic tasks, especially for those in training. Few quantitative evaluations of surgical task performance comparing laparoscopic and robotic platforms among surgeons of varying experience levels have been done. We compared measures of quality and efficiency of Fundamentals of Laparoscopic Surgery task performance on these platforms in novices and experienced laparoscopic and robotic surgeons. Fourteen novices, 12 expert laparoscopic surgeons (>100 laparoscopic procedures performed, no robotics experience), and five expert robotic surgeons (>25 robotic procedures performed) performed three Fundamentals of Laparoscopic Surgery tasks on both laparoscopic and robotic platforms: peg transfer (PT), pattern cutting (PC), and intracorporeal suturing. All tasks were repeated three times by each subject on each platform in a randomized order. Mean completion times and mean errors per trial (EPT) were calculated for each task on both platforms. Results were compared using Student's t-test (P < 0.05 considered statistically significant). Among novices, greater errors were noted during laparoscopic PC (Lap 2.21 versus Robot 0.88 EPT, P < 0.001). Among expert laparoscopists, greater errors were noted during laparoscopic PT compared with robotic (PT: Lap 0.14 versus Robot 0.00 EPT, P = 0.04). Among expert robotic surgeons, greater errors were noted during laparoscopic PC compared with robotic (Lap 0.80 versus Robot 0.13 EPT, P = 0.02). Among expert laparoscopists, task performance was slower on the robotic platform compared with laparoscopy. In comparisons of expert laparoscopists performing tasks on the laparoscopic platform and expert robotic surgeons performing tasks on the robotic platform, expert robotic surgeons demonstrated fewer errors during the PC task (P = 0.009). Robotic assistance provided a reduction in errors at all experience levels for some laparoscopic tasks, but no benefit in the speed of task performance. Robotic assistance may provide some benefit in precision of surgical task performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  1. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke.

    PubMed

    Johnson, Michelle J

    2006-12-18

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity such as mass-practice, feedback, task-engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games, combined with new low-cost robot systems and a humanoid robot that embodies the "supervisory presence" of the therapy, as possible solutions to exercise compliance in under-supervised environments such as the home.

  2. The AGINAO Self-Programming Engine

    NASA Astrophysics Data System (ADS)

    Skaba, Wojciech

    2013-01-01

    The AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program, that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supplies the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

  3. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    PubMed Central

    Johnson, Michelle J

    2006-01-01

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity such as mass-practice, feedback, task-engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games, combined with new low-cost robot systems and a humanoid robot that embodies the "supervisory presence" of the therapy, as possible solutions to exercise compliance in under-supervised environments such as the home. PMID:17176474

  4. Biomechanics of Step Initiation After Balance Recovery With Implications for Humanoid Robot Locomotion.

    PubMed

    Miller Buffinton, Christine; Buffinton, Elise M; Bieryla, Kathleen A; Pratt, Jerry E

    2016-03-01

    Balance-recovery stepping is often necessary for both a human and humanoid robot to avoid a fall by taking a single step or multiple steps after an external perturbation. The determination of where to step to come to a complete stop has been studied, but little is known about the strategy for initiation of forward motion from the static position following such a step. The goal of this study was to examine the human strategy for stepping by moving the back foot forward from a static, double-support position, comparing parameters from normal step length (SL) to those from increasing SLs to the point of step failure, to provide inspiration for a humanoid control strategy. Healthy young adults instrumented with joint reflective markers executed a prescribed-length step from rest while marker positions and ground reaction forces (GRFs) were measured. The participants were scaled to the Gait2354 model in opensim software to calculate body kinematic and joint kinetic parameters, with further post-processing in matlab. With increasing SL, participants reduced both static and push-off back-foot GRF. Body center of mass (CoM) lowered and moved forward, with additional lowering at the longer steps, and followed a path centered within the initial base of support (BoS). Step execution was successful if participants gained enough forward momentum at toe-off to move the instantaneous capture point (ICP) to within the BoS defined by the final position of both feet on the front force plate. All lower extremity joint torques increased with SL except ankle joint. Front knee work increased dramatically with SL, accompanied by decrease in back-ankle work. As SL increased, the human strategy changed, with participants shifting their CoM forward and downward before toe-off, thus gaining forward momentum, while using less propulsive work from the back ankle and engaging the front knee to straighten the body. The results have significance for human motion, suggesting the upper limit of the SL that can be completed with back-ankle push-off before additional knee flexion and torque is needed. For biped control, the results support stability based on capture-point dynamics and suggest strategy for center-of-mass trajectory and distribution of ground force reactions that can be compared with robot controllers for initiation of gait after recovery steps.
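
    The instantaneous capture point used in the analysis has a standard closed form under the linear inverted pendulum model: x_icp = x_com + xdot_com / omega0 with omega0 = sqrt(g / z_com), the point over which the foot must be placed for the CoM to come to rest. The sketch below evaluates that expression for illustrative CoM states (the numbers are assumptions, not the study's data).

```python
import math

G = 9.81

def instantaneous_capture_point(com_pos, com_vel, com_height):
    """ICP under the linear inverted pendulum model: x_icp = x + xdot / omega0."""
    omega0 = math.sqrt(G / com_height)
    return com_pos + com_vel / omega0

# Illustrative values: CoM 0.9 m high, 5 cm ahead of the stance ankle,
# moving forward at 0.4 m/s after toe-off.
icp = instantaneous_capture_point(com_pos=0.05, com_vel=0.4, com_height=0.9)
print(f"ICP is {icp:.3f} m ahead of the ankle; the step succeeds if the "
      f"front-foot base of support covers this point.")
```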

  5. Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

    NASA Astrophysics Data System (ADS)

    Barakova, E. I.; De Haas, M.; Kuijpers, W.; Irigoyen, N.; Betancourt, A.

    2018-01-01

    In search of better technological solutions for education, we adapted a principle from economic game theory, namely that offering help promotes collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children when playing a checkers game combined with a social strategy against a robot or against a computer. We found that by combining the social and game strategies the children (average age of 8.3 years) had more empathy and social engagement with the robot, since they did not necessarily want to win against it. This finding is promising for using social strategies to create long-term relations between robots and children and to make educational tasks more engaging. An additional outcome of the study was the significant difference in the children's perception of the difficulty of the game: the game with the robot was seen as more challenging and the robot as a smarter opponent. This finding might be due to the higher perceived or expected intelligence of the robot, or to the higher complexity of seeing patterns in a three-dimensional world.

  6. Progress in EEG-Based Brain Robot Interaction Systems

    PubMed Central

    Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram (EEG)-based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients with daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging since control of robot systems via brainwaves must consider real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488
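
    The decoding chain summarized above (preprocessing, feature extraction, feature classification) is commonly prototyped along the lines of the sketch below: band-pass filtering to the mu/beta band, log band-power features per channel, and a linear classifier. The data here are synthetic and all parameters are assumptions; a real BRI system would use recorded EEG epochs from a validated paradigm.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250                        # sampling rate (Hz), assumed
rng = np.random.default_rng(0)

def preprocess(epoch, low=8.0, high=30.0):
    """Band-pass filter one epoch (channels x samples) to the mu/beta band."""
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, epoch, axis=-1)

def features(epoch):
    """Log band-power per channel, a simple motor-imagery style feature."""
    return np.log(np.var(preprocess(epoch), axis=-1))

# Synthetic 2-class dataset: 100 epochs, 8 channels, 2 s each.
epochs = rng.normal(size=(100, 8, 2 * FS))
labels = rng.integers(0, 2, size=100)
epochs[labels == 1, :4] *= 1.5          # inject class-dependent band power

X = np.array([features(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X[:80], labels[:80])
print("held-out accuracy:", clf.score(X[80:], labels[80:]))
```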

  7. Primate Anatomy, Kinematics, and Principles for Humanoid Design

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Ambrose, Catherine G.

    2004-01-01

    The primate order of animals is investigated for clues in the design of Humanoid Robots. The pursuit is directed with a theory that kinematics, musculature, perception, and cognition can be optimized for specific tasks by varying the proportions of limbs, and in particular, the points of branching in kinematic trees such as the primate skeleton. Called the Bifurcated Chain Hypothesis, the theory is that the branching proportions found in humans may be superior to other animals and primates for the tasks of dexterous manipulation and other human specialties. The primate taxa are defined, contemporary primate evolution hypotheses are critiqued, and variations within the order are noted. The kinematic branching points of the torso, limbs and fingers are studied for differences in proportions across the order, and associated with family and genus capabilities and behaviors. The human configuration of a long waist, long neck, and short arms is graded using a kinematic workspace analysis and a set of design axioms for mobile manipulation robots. It scores well. The re-emergence of the human waist, seen in early Prosimians and Monkeys for arboreal balance, but lost in the terrestrial Pongidae, is postulated as benefiting human dexterity. The human combination of an articulated waist and neck will be shown to enable the use of smaller arms, achieving greater regions of workspace dexterity than the larger limbs of Gorillas and other Hominoidea.

  8. Elastic MCF Rubber with Photovoltaics and Sensing for Use as Artificial or Hybrid Skin (H-Skin): 1st Report on Dry-Type Solar Cell Rubber with Piezoelectricity for Compressive Sensing.

    PubMed

    Shimada, Kunio

    2018-06-05

    Ordinary solar cells are very difficult to bend, compress, or stretch. However, if they possessed elastic, flexible, and extensible properties, in addition to piezoelectricity and resistivity, they could be put to effective use as artificial skin installed over human-like robots or humanoids. Such a skin could also serve as a husk that generates electric power from solar energy and perceives force or temperature changes. Therefore, we propose a new type of artificial skin, called hybrid skin (H-Skin), for a humanoid robot having hybrid functions. In this study, a novel elastic solar cell is developed from natural rubber that is electrolytically polymerized with a configuration of magnetic clusters of metal particles incorporated into the rubber by applying a magnetic field. The material thus produced is named magnetic compound fluid rubber (MCF rubber); it is elastic, flexible, and extensible. The present report deals with a dry-type MCF rubber solar cell that uses photosensitized dye molecules. First, the photovoltaic mechanism in the material is investigated. Next, the changes in its photovoltaic properties under irradiation by visible light are measured under compression, and the effect of compression on its piezoelectric properties is investigated.

  9. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid.

    PubMed

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness measured using Inertial Measurement Units (IMUs)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, among the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures of the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions.
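
    The principal component analysis referred to above can be illustrated with a short sketch: synthetic motion measures (gesture frequency and smoothness values of the kind derived from IMUs) are mean-centred and decomposed by SVD to obtain components and explained variance. The data and variable names are invented; this is not the authors' analysis pipeline.

      import numpy as np

      def pca(X, n_components=2):
          # Principal component analysis via SVD of the mean-centred data matrix.
          Xc = X - X.mean(axis=0)
          U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
          explained = (S ** 2) / np.sum(S ** 2)
          return Xc @ Vt[:n_components].T, Vt[:n_components], explained[:n_components]

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          # Columns: goodbye-gesture frequency, arm smoothness, head smoothness (synthetic).
          motion = rng.standard_normal((30, 3))
          motion[:, 2] = 0.8 * motion[:, 0] + 0.2 * rng.standard_normal(30)  # correlated pair
          scores, components, explained = pca(motion)
          print("explained variance ratio:", np.round(explained, 2))
          print("loadings of first component:", np.round(components[0], 2))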

  10. Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?

    PubMed Central

    Martini, Molly C.; Gonzalez, Christian A.; Wiese, Eva

    2016-01-01

    Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution and whether different aspects of having a mind (e.g., feeling pain, being able to move) need different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood by which mind is attributed towards non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described as a two-linear model with no change in mind attribution on the spectrum from mechanistic to humanoid robot, but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached, that is: agents need to be classified as having a mind first before the addition of more human-like features significantly increases the degree to which mind is attributed to that agent. PMID:26745500

  11. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid

    PubMed Central

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire) particularly regarding feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness using Inertial Measurement Units (IMU)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and: the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, in regards to the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures when it comes to considering the familiar experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in its relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions. PMID:24688466

  12. Robot Comedy Lab: experimenting with the social dynamics of live performance

    PubMed Central

    Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias

    2015-01-01

    The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them and negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they're having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585

  13. Examples of design and achievement of vision systems for mobile robotics applications

    NASA Astrophysics Data System (ADS)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and build a multiple-purpose vision system for various robotics applications: wheeled robots (such as cars for autonomous driving), legged robots (six-legged, four-legged such as SONY's AIBO, and humanoid), and flying robots (to inspect bridges, for example), in various conditions, indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software, or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, the results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
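
    The abstract does not spell out the exact operator chain, so the sketch below only illustrates a typical gradient-based edge segmentation step (Sobel filtering followed by a fixed threshold) of the kind such a chain might contain; the kernel choice, threshold, and test image are assumptions.

      import numpy as np

      SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
      SOBEL_Y = SOBEL_X.T

      def correlate2d(img, kernel):
          # Naive valid-mode 2-D cross-correlation, adequate for a small test image.
          kh, kw = kernel.shape
          h, w = img.shape
          out = np.zeros((h - kh + 1, w - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
          return out

      def edge_map(img, threshold=1.0):
          # Gradient magnitude followed by a fixed threshold -> binary edge map.
          gx = correlate2d(img, SOBEL_X)
          gy = correlate2d(img, SOBEL_Y)
          return np.hypot(gx, gy) > threshold

      if __name__ == "__main__":
          img = np.zeros((32, 32))
          img[8:24, 8:24] = 1.0                 # bright square on a dark background
          edges = edge_map(img)
          print("edge pixels found:", int(edges.sum()))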

  14. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted both in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot™ robot platform was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks also allowed sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.
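
    The paper's own mitigation techniques are not listed in this abstract; as one generic, hedged example of defending command integrity on such a client/server link, the sketch below authenticates each command with an HMAC-SHA256 tag using Python's standard hmac module. The key, message format, and function names are hypothetical.

      import hmac
      import hashlib

      SECRET_KEY = b"shared-secret-demo-key"   # hypothetical pre-shared key

      def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
          # Append an HMAC-SHA256 tag so the receiver can detect tampering.
          tag = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
          return command + b"|" + tag

      def verify_command(message: bytes, key: bytes = SECRET_KEY):
          # Return the command if the tag matches, otherwise None.
          try:
              command, tag = message.rsplit(b"|", 1)
          except ValueError:
              return None
          expected = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
          return command if hmac.compare_digest(expected, tag) else None

      if __name__ == "__main__":
          msg = sign_command(b"drive forward 0.5 m/s")
          print(verify_command(msg))                      # accepted
          tampered = msg.replace(b"0.5", b"5.0")
          print(verify_command(tampered))                 # rejected -> None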

  15. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted both in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot™ robot platform was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks also allowed sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  16. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real-time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track the relative motion of the robot with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up whereby the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results show that the presented hybrid position tracking system can track the position and orientation with greater accuracy than the webcam or optical sensors alone. The results also confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
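
    The fusion scheme itself is not specified in this abstract; the sketch below shows one simple, complementary-filter-style way to blend high-rate relative displacement estimates (as from the optical sensors) with low-rate absolute fixes (as from the webcam). The gain, rates, and synthetic data are assumptions, not the authors' method.

      import numpy as np

      def fuse_position(rel_increments, abs_fixes, fix_every, gain=0.3):
          # rel_increments: (N, 2) high-rate displacement estimates (optical sensors).
          # abs_fixes: dict {sample_index: (x, y)} low-rate absolute fixes (webcam).
          # gain: how strongly an absolute fix pulls the estimate toward itself.
          est = np.zeros(2)
          trajectory = []
          for k, d in enumerate(rel_increments):
              est = est + d                                  # dead-reckoning update
              if k % fix_every == 0 and k in abs_fixes:
                  est = (1 - gain) * est + gain * np.asarray(abs_fixes[k])
              trajectory.append(est.copy())
          return np.array(trajectory)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          true_step = np.array([0.01, 0.005])
          steps = true_step + 0.002 * rng.standard_normal((200, 2))   # drifting odometry
          fixes = {k: (k + 1) * true_step for k in range(0, 200, 20)} # periodic webcam fixes
          traj = fuse_position(steps, fixes, fix_every=20)
          print("final fused estimate:", np.round(traj[-1], 3))
          print("ground-truth end point:", np.round(200 * true_step, 3))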

  17. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared while considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
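
    As a hedged sketch of the Bayes-inference fusion idea, the code below combines an audio bearing estimate and a visual bearing estimate over a grid of candidate azimuths, weighting the sharper visual cue more strongly through a smaller assumed standard deviation. The grid, Gaussian likelihoods, and noise values are illustrative assumptions rather than the paper's implementation.

      import numpy as np

      def gaussian_likelihood(grid, measurement, sigma):
          # Unnormalised Gaussian likelihood of each candidate direction (degrees).
          return np.exp(-0.5 * ((grid - measurement) / sigma) ** 2)

      def fuse_direction(prior, audio_meas, visual_meas, grid):
          # Bayes rule: posterior is proportional to prior x audio likelihood x visual likelihood.
          posterior = (prior
                       * gaussian_likelihood(grid, audio_meas, sigma=15.0)
                       * gaussian_likelihood(grid, visual_meas, sigma=5.0))
          return posterior / posterior.sum()

      if __name__ == "__main__":
          grid = np.arange(-90, 91, 1.0)            # candidate azimuths in degrees
          prior = np.ones_like(grid) / grid.size    # uniform prior over directions
          posterior = fuse_direction(prior, audio_meas=22.0, visual_meas=18.0, grid=grid)
          print("most probable speaker azimuth:", grid[np.argmax(posterior)], "deg")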

  18. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point, but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
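
    For background on the LIPM that the DLIPM extends, the sketch below integrates the standard model x'' = (g / z_c) * (x - p_zmp) under a step-shaped ZMP reference, similar in spirit to the push-recovery perturbations described above. The parameters are arbitrary illustrative values, and the open-loop divergence it shows is precisely what a ZMP-based balance controller must counteract; this is not the paper's DLIPM.

      import numpy as np

      G = 9.81          # gravity [m/s^2]
      Z_C = 0.6         # assumed constant CoM height [m]

      def simulate_lipm(x0, v0, zmp_ref, dt=0.005, steps=400):
          # Integrate the LIPM equation x'' = (g / z_c) * (x - p_zmp) with explicit Euler.
          x, v = x0, v0
          history = []
          for k in range(steps):
              p_zmp = zmp_ref(k * dt)
              a = (G / Z_C) * (x - p_zmp)
              v += a * dt
              x += v * dt
              history.append((x, p_zmp))
          return np.array(history)

      def step_zmp(t):
          # Step-shaped ZMP reference variation [m], mimicking a push-recovery trial.
          return 0.0 if t < 0.5 else 0.05

      if __name__ == "__main__":
          hist = simulate_lipm(x0=0.0, v0=0.0, zmp_ref=step_zmp)
          # Without feedback the CoM drifts away from the shifted ZMP, motivating control.
          print("CoM position after 2 s:", round(float(hist[-1, 0]), 4), "m")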

  19. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed

    Erikson, Henrik; Salzmann-Erikson, Martin

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance.

  20. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed Central

    Erikson, Henrik; Salzmann-Erikson, Martin

    2016-01-01

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance. PMID:27455058

  1. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for performing any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
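
    TaskForce itself is not specified beyond this abstract, so the sketch below only illustrates the general supervisory-control pattern described: execute a step, verify it autonomously, attempt one corrective action, and only then ask the operator. All class and function names are hypothetical.

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class Step:
          name: str
          execute: Callable[[], bool]     # returns True when the action nominally finishes
          verify: Callable[[], bool]      # autonomous check that the goal was reached
          correct: Callable[[], None]     # corrective action to try before escalating

      def run_task(steps, ask_operator):
          # Supervisory pattern: try, verify, self-correct once, then escalate to a human.
          for step in steps:
              step.execute()
              if step.verify():
                  continue
              step.correct()
              step.execute()
              if step.verify():
                  continue
              if not ask_operator(step.name):
                  return False                      # operator aborted the task
          return True

      if __name__ == "__main__":
          grasped = {"ok": False}
          def grasp():
              grasped["ok"] = True
              return True
          def check():
              return grasped["ok"]
          def regrasp():
              grasped["ok"] = False
          steps = [Step("grasp_cargo_bag_handle", grasp, check, regrasp)]
          print("task succeeded:", run_task(steps, ask_operator=lambda name: True))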

  2. [Supporting an ASD child with digital tools].

    PubMed

    Vallart, Etienne; Gicquel, Ludovic

    Autism spectrum disorders lead to a long-term and severe impairment of communication and social interactions. The expansion of information and communication technologies, through digital applications which can be used on different devices, can be used to support these functions necessary for the development of children with ASD. Applications, serious games and even humanoid robots help to boost children's interest in learning. They must however form part of a broader range of therapies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  3. KSC-2010-4379

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  4. KSC-2010-4378

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  5. Future robotic platforms in urologic surgery: Recent Developments

    PubMed Central

    Herrell, S. Duke; Webster, Robert; Simaan, Nabil

    2014-01-01

    Purpose of review: To review recent developments at Vanderbilt University of new robotic technologies and platforms designed for minimally invasive urologic surgery and their design rationale and potential roles in advancing current urologic surgical practice. Recent findings: Emerging robotic platforms are being developed to improve performance of a wider variety of urologic interventions beyond the standard minimally invasive robotic urologic surgeries conducted presently with the da Vinci platform. These newer platforms are designed to incorporate significant advantages of robotics to improve the safety and outcomes of transurethral bladder surgery and surveillance, further decrease the invasiveness of interventions by advancing LESS surgery, and allow for previously impossible needle access and ablation delivery. Summary: Three new robotic surgical technologies that have been developed at Vanderbilt University are reviewed, including a robotic transurethral system to enhance bladder surveillance and TURBT, a purpose-specific robotic system for LESS, and a needle-sized robot that can be used as either a steerable needle or a small surgeon-controlled micro-laparoscopic manipulator. PMID:24253803

  6. Robotic platform for traveling on vertical piping network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel

    This invention relates generally to robotic systems and is specifically designed for a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system allows a process for sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above waste material contained in the tank.

  7. Robotics in biomedical chromatography and electrophoresis.

    PubMed

    Fouda, H G

    1989-08-11

    The ideal laboratory robot can be viewed as "an indefatigable assistant capable of working continuously for 24 h a day with constant efficiency". The development of a system approaching that promise requires considerable skill and time commitment, a thorough understanding of the capabilities and limitations of the robot and its specialized modules and an intimate knowledge of the functions to be automated. The robot need not emulate every manual step. Effective substitutes for difficult steps must be devised. The future of laboratory robots depends not only on technological advances in other fields, but also on the skill and creativity of chromatographers and other scientists. The robot has been applied to automate numerous biomedical chromatography and electrophoresis methods. The quality of its data can approach, and in some cases exceed, that of manual methods. Maintaining high data quality during continuous operation requires frequent maintenance and validation. Well designed robotic systems can yield substantial increase in the laboratory productivity without a corresponding increase in manpower. They can free skilled personnel from mundane tasks and can enhance the safety of the laboratory environment. The integration of robotics, chromatography systems and laboratory information management systems permits full automation and affords opportunities for unattended method development and for future incorporation of artificial intelligence techniques and the evolution of expert systems. Finally, humanoid attributes aside, robotic utilization in the laboratory should not be an end in itself. The robot is a useful tool that should be utilized only when it is prudent and cost-effective to do so.

  8. Miniature in vivo robotics and novel robotic surgical platforms.

    PubMed

    Shah, Bhavin C; Buettner, Shelby L; Lehman, Amy C; Farritor, Shane M; Oleynikov, Dmitry

    2009-05-01

    Robotic surgical systems, such as the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, California), have revolutionized laparoscopic surgery but are limited by large size, increased costs, and limitations in imaging. Miniature in vivo robots are being developed that are inserted entirely into the peritoneal cavity for laparoscopic and natural orifice transluminal endoscopic surgical (NOTES) procedures. In the future, miniature camera robots and microrobots should be able to provide a mobile viewing platform. This article discusses the current state of miniature robotics and novel robotic surgical platforms and the development of future robotic technology for general surgery and urology.

  9. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
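
    A minimal sketch of the sensory-substitution mapping described above: each joint angle of a (hypothetical) three-joint arm is linearly mapped to the vibration frequency of the corresponding forearm actuator. The joint limits and frequency range are assumed values, not those of the actual device.

      import numpy as np

      # Hypothetical joint limits (radians) for a 3-DOF robot arm and the vibration
      # frequency range (Hz) each wearable actuator can render.
      JOINT_LIMITS = [(-1.57, 1.57), (0.0, 2.5), (-2.0, 2.0)]
      FREQ_RANGE = (40.0, 250.0)

      def joint_angles_to_frequencies(angles):
          # Linearly map each joint angle within its limits to a vibration frequency.
          freqs = []
          for angle, (lo, hi) in zip(angles, JOINT_LIMITS):
              t = np.clip((angle - lo) / (hi - lo), 0.0, 1.0)
              freqs.append(FREQ_RANGE[0] + t * (FREQ_RANGE[1] - FREQ_RANGE[0]))
          return freqs

      if __name__ == "__main__":
          pose = [0.3, 1.2, -0.5]                      # current arm configuration (rad)
          print([round(f, 1) for f in joint_angles_to_frequencies(pose)])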

  10. Humanoids Designed to do Work

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron

    2001-01-01

    NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, instead Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time, and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.

  11. Referral of sensation to an advanced humanoid robotic hand prosthesis.

    PubMed

    Rosén, Birgitta; Ehrsson, H Henrik; Antfolk, Christian; Cipriani, Christian; Sebelius, Fredrik; Lundborg, Göran

    2009-01-01

    Hand prostheses that are currently available on the market are used by amputees to only a limited extent, partly because of lack of sensory feedback from the artificial hand. We report a pilot study that showed how amputees can experience a robot-like advanced hand prosthesis as part of their own body. We induced a perceptual illusion by which touch applied to the stump of the arm was experienced from the artificial hand. This illusion was elicited by applying synchronous tactile stimulation to the hidden amputation stump and the robotic hand prosthesis in full view. In five people who had had upper limb amputations this stimulation caused referral touch sensation from the stump to the artificial hand, and the prosthesis was experienced more like a real hand. We also showed that this illusion can work when the amputee controls the movements of the artificial hand by recordings of the arm muscle activity with electromyograms. These observations indicate that the previously described "rubber hand illusion" is also valid for an advanced hand prosthesis, even when it has a robotic-like appearance.

  12. Reading sadness beyond human faces.

    PubMed

    Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie

    2010-08-12

    Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERPs components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by media. Inverted human stimuli elicited a longer latency of P1 and a larger N170 amplitude while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased to human faces but rather extend to non-human displays, thus suggesting our capacity to read emotion beyond faces. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Robotic Fish to Aid Animal Behavior Studies and Informal Science Learning

    NASA Astrophysics Data System (ADS)

    Phamduy, Paul

    The application of robotic fish in the fields of animal behavior and informal science learning is new and relatively untapped. In the context of animal behavior studies, robotic fish offer a consistent and customizable stimulus that could help dissect the determinants of social behavior. In the realm of informal science learning, robotic fish are gaining momentum for the possibility of educating the general public simultaneously on fish physiology and underwater robotics. In this dissertation, the design and development of a number of robotic fish platforms and prototypes and their application in animal behavioral studies and informal science learning settings are presented. Robotic platforms for animal behavioral studies focused on the use of replica or same-scale prototypes. A novel robotic fish platform, featuring a three-dimensional swimming multi-linked robotic fish, was developed with three control modes varying in the level of robot autonomy offered. This platform was deployed at numerous science festivals and science centers to obtain data on visitor engagement and experience.

  14. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
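
    As a hedged illustration of the HMM-based prediction idea, the sketch below trains one Gaussian HMM per task on synthetic tele-operation trajectories and labels a new trajectory by the model with the highest log-likelihood. It assumes the third-party hmmlearn package; the data, model sizes, and task names are invented and not the deployed algorithm.

      import numpy as np
      from hmmlearn import hmm   # assumed third-party dependency: pip install hmmlearn

      def make_trajectories(offset, n_seqs=10, length=50, seed=0):
          # Synthetic 3-D hand trajectories for one task, drifting along `offset`.
          rng = np.random.default_rng(seed)
          seqs = [np.cumsum(rng.normal(offset, 0.05, size=(length, 3)), axis=0)
                  for _ in range(n_seqs)]
          return np.vstack(seqs), [length] * n_seqs

      def train_model(X, lengths):
          model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
          model.fit(X, lengths)
          return model

      if __name__ == "__main__":
          reach_X, reach_len = make_trajectories(offset=[0.02, 0.0, 0.0], seed=1)
          lift_X, lift_len = make_trajectories(offset=[0.0, 0.0, 0.02], seed=2)
          models = {"reach": train_model(reach_X, reach_len),
                    "lift": train_model(lift_X, lift_len)}
          test, _ = make_trajectories(offset=[0.0, 0.0, 0.02], n_seqs=1, seed=3)
          scores = {name: m.score(test) for name, m in models.items()}
          print("predicted task:", max(scores, key=scores.get))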

  15. The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions

    PubMed Central

    Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris

    2012-01-01

    Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639

  16. USA Science and Engineering Festival 2014

    NASA Image and Video Library

    2014-04-25

    An attendee of the USA Science and Engineering Festival observes Robonaut 2 at the NASA Stage. Robonaut 2 is NASA's first dexterous humanoid robot that has been working on the International Space Station for the last three years. R2 recently received 1.2 meter long legs to allow mobility. This will enable R2 to assist more with regular and repetitive tasks inside and outside the station. The USA Science and Engineering Festival took place at the Washington Convention Center in Washington, DC on April 26 and 27, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  17. USA Science and Engineering Festival 2014

    NASA Image and Video Library

    2014-04-25

    Two boys attending the USA Science and Engineering Festival pose with Robonaut 2 at the NASA Stage. Robonaut 2 is NASA's first dexterous humanoid robot that has been working on the International Space Station for the last three years. R2 recently received 1.2 meter long legs to allow mobility. This will enable R2 to assist more with regular and repetitive tasks inside and outside the station. The USA Science and Engineering Festival took place at the Washington Convention Center in Washington, DC on April 26 and 27, 2014. Photo Credit: (NASA/Aubrey Gemignani)

  18. Review of emerging surgical robotic technology.

    PubMed

    Peters, Brian S; Armijo, Priscila R; Krause, Crystal; Choudhury, Songita A; Oleynikov, Dmitry

    2018-04-01

    The use of laparoscopic and robotic procedures has increased in general surgery. Minimally invasive robotic surgery has made tremendous progress in a relatively short period of time, realizing improvements for both the patient and surgeon. This has led to an increase in the use and development of robotic devices and platforms for general surgery. The purpose of this review is to explore current and emerging surgical robotic technologies in a growing and dynamic environment of research and development. This review explores medical and surgical robotic endoscopic surgery and peripheral technologies currently available or in development. The devices discussed here are specific to general surgery, including laparoscopy, colonoscopy, esophagogastroduodenoscopy, and thoracoscopy. Benefits and limitations of each technology were identified and applicable future directions were described. A number of FDA-approved devices and platforms for robotic surgery were reviewed, including the da Vinci Surgical System, Sensei X Robotic Catheter System, FreeHand 1.2, invendoscopy E200 system, Flex® Robotic System, Senhance, ARES, the Single-Port Instrument Delivery Extended Research (SPIDER), and the NeoGuide Colonoscope. Additionally, platforms were reviewed which have not yet obtained FDA approval including MiroSurge, ViaCath System, SPORT™ Surgical System, SurgiBot, Versius Robotic System, Master and Slave Transluminal Endoscopic Robot, Verb Surgical, Miniature In Vivo Robot, and the Einstein Surgical Robot. The use and demand for robotic medical and surgical platforms is increasing and new technologies are continually being developed. New technologies are increasingly implemented to improve on the capabilities of previously established systems. Future studies are needed to further evaluate the strengths and weaknesses of each robotic surgical device and platform in the operating suite.

  19. In vivo miniature robots for natural orifice surgery: State of the art and future perspectives.

    PubMed

    Tiwari, Manish M; Reynoso, Jason F; Lehman, Amy C; Tsang, Albert W; Farritor, Shane M; Oleynikov, Dmitry

    2010-06-27

    Natural orifice translumenal endoscopic surgery (NOTES) is the integration of laparoscopic minimally invasive surgery techniques with endoscopic technology. Despite the advances in NOTES technology, the approach presents several unique instrumentation and technique-specific challenges. Current flexible endoscopy platforms for NOTES have several drawbacks including limited stability, triangulation and dexterity, and lack of adequate visualization, suggesting the need for new and improved instrumentation for this approach. Much of the current focus is on the development of flexible endoscopy platforms that incorporate robotic technology. An alternative approach to access the abdominal viscera for either a laparoscopic or NOTES procedure is the use of small robotic devices that can be implanted in an intracorporeal manner. Multiple, independent, miniature robots can be simultaneously inserted into the abdominal cavity to provide a robotic platform for NOTES surgery. The capabilities of the robots include imaging, retraction, tissue and organ manipulation, and precise maneuverability in the abdominal cavity. Such a platform affords several advantages including enhanced visualization, better surgical dexterity and improved triangulation for NOTES. This review discusses the current status and future perspectives of this novel miniature robotics platform for the NOTES approach. Although these technologies are still in pre-clinical development, a miniature robotics platform provides a unique method for addressing the limitations of minimally invasive surgery, and NOTES in particular.

  20. Securing Safety with Sensors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Robot Systems Technology Branch at NASA's Johnson Space Center collaborated with the Defense Advanced Research Projects Agency to design Robonaut, a humanoid robot developed to assist astronauts with Extra Vehicular Activities (EVA) such as space structure assembly and repair operations. By working side-by-side with astronauts or going where risks are too great for people, Robonaut is expected to expand the Space Agency's ability for construction and discovery. NASA engineers equipped Robonaut with human-looking, dexterous hands complete with five fingers to accomplish its tasks. The Robonaut hand is one of the first being developed for space EVA use and is the closest in size and capability to a suited astronaut's hand. As part of the development process, an advanced sensor system was needed to provide an improved method to measure the movement and forces exerted by Robonaut's forearms and hands.

  1. Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay

    NASA Technical Reports Server (NTRS)

    SunSpiral, Vytas; Wheeler, Kevin R.; Allan, Mark B.; Martin, Rodney

    2006-01-01

    Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual path research agenda is reviewed which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior to be the source of autonomous control. This approach changes the role of teleoperation from being a stand-in for autonomy into the first data collection step for developing generative models capable of autonomous control of the robot.
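
    The paper's models are not reproduced here; as a small sketch of how six-dimensional trajectories are commonly represented and compared, the code below stores each sample as a position vector plus a unit quaternion and combines Euclidean and quaternion geodesic distances with an assumed weighting. The weighting and distance definition are illustrative choices, not the paper's method.

      import numpy as np

      def quat_geodesic(q1, q2):
          # Angle between two unit quaternions (shortest rotation), in radians.
          dot = abs(float(np.dot(q1, q2)))
          return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

      def pose_distance(p1, q1, p2, q2, w_rot=0.1):
          # Combine positional error [m] and rotational error [rad] into one scalar.
          return np.linalg.norm(p1 - p2) + w_rot * quat_geodesic(q1, q2)

      def trajectory_distance(traj_a, traj_b, w_rot=0.1):
          # traj: list of (position (3,), unit quaternion (4,)) samples of equal length.
          return sum(pose_distance(pa, qa, pb, qb, w_rot)
                     for (pa, qa), (pb, qb) in zip(traj_a, traj_b)) / len(traj_a)

      if __name__ == "__main__":
          identity = np.array([1.0, 0.0, 0.0, 0.0])
          yaw_10deg = np.array([np.cos(np.radians(5)), 0.0, 0.0, np.sin(np.radians(5))])
          traj_a = [(np.array([0.0, 0.0, 0.0]), identity),
                    (np.array([0.1, 0.0, 0.0]), identity)]
          traj_b = [(np.array([0.0, 0.0, 0.02]), identity),
                    (np.array([0.1, 0.01, 0.0]), yaw_10deg)]
          print("mean 6-D distance:", round(trajectory_distance(traj_a, traj_b), 4))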

  2. How long did it last? You would better ask a human

    PubMed Central

    Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694

  3. How long did it last? You would better ask a human.

    PubMed

    Lacquaniti, Francesco; Carrozzo, Mauro; d'Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions.

  4. New technologies in robotic surgery: the Korean experience.

    PubMed

    Tuliao, Patrick H; Kim, Sang W; Rha, Koon H

    2014-01-01

    The development of the robotic systems has made surgery an increasingly technology-driven field. Since the introduction of the first robotic platform in 2005, surgical practice in South Korea has also been caught up in the global robotic revolution. Consequently, a market focused on improving the robotic systems was created and Korea has emerged as one of its frontrunners. This article reviews the Korean experience in developing various robotic technologies and then Korea's most recent contributions to the development of new technologies in robotic surgery. The goal of new technologies in the field of robotic surgery has been to improve on the current platforms by eliminating their disadvantages. The pressing goal is to develop a platform that is less bulky, more ergonomic, and capable of providing force feedback to the surgeon. In Korea, the Lapabot and two new robotic systems for single-port laparoscopic surgery are the most recent advances that have been reported. Robotic surgery is rapidly evolving and Korea has stayed in the forefront of its development. These new advancements in technology will eventually produce better robotic platforms that will greatly improve the manner in which surgical care is delivered.

  5. Modular Countermine Payload for Small Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman Herman; Doug Few; Roelof Versteeg

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.

  6. Modular countermine payload for small robots

    NASA Astrophysics Data System (ADS)

    Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward

    2010-04-01

    Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multimission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
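
    The platform-agnostic packaging described above can be illustrated, in a purely hypothetical form, by having payload code target an abstract platform adapter rather than a specific robot; the class names and methods below are invented for illustration and do not describe the actual PCU interface.

      from abc import ABC, abstractmethod

      class RobotPlatform(ABC):
          # Minimal adapter a payload controller would target instead of a specific robot.
          @abstractmethod
          def drive(self, linear_mps: float, angular_rps: float) -> None:
              ...
          @abstractmethod
          def pose(self):
              ...

      class SimulatedPlatform(RobotPlatform):
          def __init__(self):
              self._pose = [0.0, 0.0, 0.0]          # x [m], y [m], heading [rad]
          def drive(self, linear_mps, angular_rps):
              self._pose[0] += linear_mps * 0.1     # crude 100 ms integration step
              self._pose[2] += angular_rps * 0.1
          def pose(self):
              return tuple(self._pose)

      class SweepPayload:
          # Mission payload written only against the RobotPlatform interface.
          def __init__(self, platform: RobotPlatform):
              self.platform = platform
          def sweep_forward(self, steps=5):
              for _ in range(steps):
                  self.platform.drive(0.2, 0.0)
              return self.platform.pose()

      if __name__ == "__main__":
          payload = SweepPayload(SimulatedPlatform())
          print("pose after sweep:", payload.sweep_forward())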

  7. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    PubMed Central

    Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo

    2017-01-01

    In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning that uses the co-occurrence between words and sensory-channel information in complex situations, rather than the conventional settings of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. PMID:29311888
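
    The full Bayesian generative model is beyond an abstract-level sketch, but the cross-situational intuition (words that repeatedly co-occur with a sensory-channel category accumulate association strength) can be shown with a simple co-occurrence counter normalised into conditional probabilities. The toy episodes and channel names are assumptions, not the paper's data or model.

      from collections import defaultdict

      def learn_associations(episodes):
          # episodes: list of (words, observations), where observations maps a
          # sensory channel ("action", "object", "color", ...) to an observed category.
          counts = defaultdict(lambda: defaultdict(int))
          for words, observations in episodes:
              for word in words:
                  for channel, category in observations.items():
                      counts[word][(channel, category)] += 1
          # Normalise counts into conditional probabilities P(channel-category | word).
          assoc = {}
          for word, c in counts.items():
              total = sum(c.values())
              assoc[word] = {k: v / total for k, v in c.items()}
          return assoc

      if __name__ == "__main__":
          episodes = [
              (["push", "red", "box"], {"action": "push", "color": "red", "object": "box"}),
              (["push", "blue", "ball"], {"action": "push", "color": "blue", "object": "ball"}),
              (["grasp", "red", "ball"], {"action": "grasp", "color": "red", "object": "ball"}),
          ]
          assoc = learn_associations(episodes)
          best = max(assoc["push"], key=assoc["push"].get)
          print("strongest association for 'push':", best)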

  8. Towards Machine Learning of Motor Skills

    NASA Astrophysics Data System (ADS)

    Peters, Jan; Schaal, Stefan; Schölkopf, Bernhard

    Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. Early approaches to this goal during the heyday of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning, which promised fully adaptive control algorithms that learn both by observation and by trial-and-error. However, to date, learning techniques have yet to fulfill this promise, as only a few methods manage to scale into the high-dimensional domains of manipulator robotics, or the upcoming trend of humanoid robotics, and scaling has usually been achieved only in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer to human-like performance. To do so, we study two major components of such an approach: first, a theoretically well-founded general approach to representing the required control structures for task representation and execution, and second, appropriate learning algorithms that can be applied in this setting.

  9. Miniature surgical robot for laparoendoscopic single-incision colectomy.

    PubMed

    Wortman, Tyler D; Meyer, Avishai; Dolghi, Oleg; Lehman, Amy C; McCormick, Ryan L; Farritor, Shane M; Oleynikov, Dmitry

    2012-03-01

    This study aimed to demonstrate the effectiveness of using a multifunctional miniature in vivo robotic platform to perform a single-incision colectomy. Standard laparoscopic techniques require multiple ports. A miniature robotic platform to be inserted completely into the peritoneal cavity through a single incision has been designed and built. The robot can be quickly repositioned, thus enabling multiquadrant access to the abdominal cavity. The miniature in vivo robotic platform used in this study consists of a multifunctional robot and a remote surgeon interface. The robot is composed of two arms with shoulder and elbow joints. Each forearm is equipped with specialized interchangeable end effectors (i.e., graspers and monopolar electrocautery). Five robotic colectomies were performed in a porcine model. For each procedure, the robot was completely inserted into the peritoneal cavity, and the surgeon manipulated the user interface to control the robot to perform the colectomy. The robot mobilized the colon from its lateral retroperitoneal attachments and assisted in the placement of a standard stapler to transect the sigmoid colon. This objective was completed for all five colectomies without any complications. The adoption of both laparoscopic and single-incision colectomies currently is constrained by the inadequacies of existing instruments. The described multifunctional robot provides a platform that overcomes existing limitations by operating completely within one incision in the peritoneal cavity and by improving visualization and dexterity. By repositioning the small robot to the area of the colon to be mobilized, the ability of the surgeon to perform complex surgical tasks is improved. Furthermore, the success of the robot in performing a completely in vivo colectomy suggests the feasibility of using this robotic platform to perform other complex surgeries through a single incision.

  10. In our own image? Emotional and neural processing differences when observing human–human vs human–robot interactions

    PubMed Central

    Wang, Yin

    2015-01-01

    Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418

  11. Robotics in endoscopy.

    PubMed

    Klibansky, David; Rothstein, Richard I

    2012-09-01

    The increasing complexity of intralumenal and emerging translumenal endoscopic procedures has created an opportunity to apply robotics in endoscopy. Computer-assisted or direct-drive robotic technology allows the triangulation of flexible tools through telemanipulation. The creation of new flexible operative platforms, along with other emerging technology such as nanobots and steerable capsules, can be transformational for endoscopic procedures. In this review, we cover some background information on the use of robotics in surgery and endoscopy, and review the emerging literature on platforms, capsules, and mini-robotic units. The development of techniques in advanced intralumenal endoscopy (endoscopic mucosal resection and endoscopic submucosal dissection) and translumenal endoscopic procedures (NOTES) has generated a number of novel platforms, flexible tools, and devices that can apply robotic principles to endoscopy. The development of a fully flexible endoscopic surgical toolkit will enable increasingly advanced procedures to be performed through natural orifices. The application of platforms and new flexible tools to the areas of advanced endoscopy and NOTES heralds the opportunity to employ useful robotic technology. Following the examples of the utility of robotics from the field of laparoscopic surgery, we can anticipate the emerging role of robotic technology in endoscopy.

  12. Modeling and Simulation for a Surf Zone Robot

    DTIC Science & Technology

    2012-12-14

    ...of-freedom surf zone robot is developed and tested with a physical test platform and with a simulated robot in Robot Operating System. Derived from ... terrain. The application of the model to future platforms is analyzed and a broad examination of the current state of surf zone robotic systems is ... public release; distribution is unlimited.

  13. Robotic single-access splenectomy using the Da Vinci Single-Site® platform: a case report.

    PubMed

    Corcione, Francesco; Bracale, Umberto; Pirozzi, Felice; Cuccurullo, Diego; Angelini, Pier Luigi

    2014-03-01

    Single-access laparoscopic splenectomy can offer patients some advantages. It has many difficulties, such as instrument clashing, lack of triangulation, odd angles and lack of space. The Da Vinci Single-Site® robotic surgery platform could decrease these difficulties. We present a case of single-access robotic splenectomy using this device. A 37 year-old female with idiopathic thrombocytopenic purpura was operated on with a single-site approach, using the Da Vinci Single-Site robotic surgery device. The procedure was successfully completed in 140 min. No intraoperative and postoperative complications occurred. The patient was discharged from hospital on day 3. Single-access robotic splenectomy seems to be feasible and safe using the new robotic single-access platform, which seems to overcome certain limits of previous robotic or conventional single-access laparoscopy. We think that additional studies should also be performed to explore the real cost-effectiveness of the platform. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Ethica ex machina: issues in roboethics.

    PubMed

    Mushiaki, Shigeru

    2013-12-01

    Is "roboethics" the "ethics of humans" or the "ethics of robots"? According to the Roboethics Roadmap (Gianmarco Veruggio), it is the human ethics of robot designers, manufacturers, and users. And ifroboethics roots deeply in society, artificial ethics (ethics of robots) might be put on the agenda some day. At the 1st International Symposium on Roboethics in San Remo, Ronald C. Arkin gave the presentation "Bombs, Bonding, and Bondage: Human-Robot Interaction and Related Ethical Issues" (2004). "Bondage" is the issue of enslavement and possible rebellion of robots. "Bombs" is the issue of military use of robots. And "bonding" is the issue of affective, emotional attachment of humans to robots. I contrast two extreme attitudes towards the issue of "bonding" and propose a middle ground. "Anthropomorphism" has two meanings. First, it means "human-shaped-ness." Second, it means "attribution of human characteristics or feelings to a nonhuman being (god, animal, or object)" (personification, empathy). Some say that Japanese (or East Asians) hold "animism," which makes it easy for them to treat robots like animated beings (to anthropomorphize robots); hence "Robot Kingdom Japan." Cosima Wagner criticizes such exaggeration and oversimplification as "invented tradition". I reinforce her argument with neuroscientific findings and argue that such "animism" is neither Shintoistic nor Buddhistic, but a universal tendency. Roboticists, especially Japanese roboticists emphasize that robotics is "anthropology." It is true that through the construction of humanoid robots we can better understand human beings (so-called "constructive approach"). But at the same time, we must not forget that robotic technology, like any other technology, changes our way of living and being--deeply: it can bring about our ontological transformation. In this sense, the governance of robotic technology is "governed governance." The interdisciplinary research area of technology assessment studies (TAS) will gain much importance. And we should always be ready to rethink the direction of the research and development of robotic technology, bearing the desirable future of human society in mind.

  15. Mobile robot knowledge base

    NASA Astrophysics Data System (ADS)

    Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory

    2005-05-01

    Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.

  16. Gradient Learning Algorithms for Ontology Computing

    PubMed Central

    Gao, Wei; Zhu, Linli

    2014-01-01

    The gradient learning model has been attracting great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the ontology dividing operator. Finally, two experiments on ontologies from the plant and humanoid robotics fields verify the efficiency of the new computational model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752

  17. Tactile Gloves for Autonomous Grasping With the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Martin, T. B.; Ambrose, R. O.; Diftler, M. A.; Platt, R., Jr.; Butzer, M. J.

    2004-01-01

    Tactile data from rugged gloves are providing the foundation for developing autonomous grasping skills for the NASA/DARPA Robonaut, a dexterous humanoid robot. These custom gloves complement the human-like dexterity available in the Robonaut hands. Multiple versions of the gloves are discussed, showing a progression in using advanced materials and construction techniques to enhance sensitivity and overall sensor coverage. The force data provided by the gloves can be used to improve dexterous, tool, and power grasping primitives. Experiments with the latest gloves focus on the use of tools, specifically a power drill used to approximate an astronaut's torque tool.

  18. A pilot study of surgical training using a virtual robotic surgery simulator.

    PubMed

    Tergas, Ana I; Sheth, Sangini B; Green, Isabel C; Giuntoli, Robert L; Winder, Abigail D; Fader, Amanda N

    2013-01-01

    Our objectives were to compare the utility of learning a suturing task on the virtual reality da Vinci Skills Simulator versus the da Vinci Surgical System dry laboratory platform and to assess user satisfaction among novice robotic surgeons. Medical trainees were enrolled prospectively; one group trained on the virtual reality simulator, and the other group trained on the da Vinci dry laboratory platform. Trainees received pretesting and post-testing on the dry laboratory platform. Participants then completed an anonymous online user experience and satisfaction survey. We enrolled 20 participants. Mean pretest completion times did not significantly differ between the 2 groups. Training with either platform was associated with a similar decrease in mean time to completion (simulator platform group, 64.9 seconds [P = .04]; dry laboratory platform group, 63.9 seconds [P < .01]). Most participants (58%) preferred the virtual reality platform. The majority found the training "definitely useful" in improving robotic surgical skills (mean, 4.6) and would attend future training sessions (mean, 4.5). Training on the virtual reality robotic simulator or the dry laboratory robotic surgery platform resulted in significant improvements in time to completion and economy of motion for novice robotic surgeons. Although there was a perception that both simulators improved performance, there was a preference for the virtual reality simulator. Benefits unique to the simulator platform include autonomy of use, computerized performance feedback, and ease of setup. These features may facilitate more efficient and sophisticated simulation training above that of the conventional dry laboratory platform, without loss of efficacy.

  19. New insights into olivo-cerebellar circuits for learning from a small training sample.

    PubMed

    Tokuda, Isao T; Hoang, Huu; Kawato, Mitsuo

    2017-10-01

    Artificial intelligence systems such as deep neural networks have exhibited remarkable performance in simulated video games and Go. In contrast, most humanoid robots in the DARPA Robotics Challenge fell to the ground. The dramatic contrast in performance is mainly due to differences in the amount of training data, which is huge in the former case and small in the latter. Animals cannot afford millions of failed trials, which would lead to injury and death; humans fall only several thousand times before they learn to balance and walk. We hypothesize that a unique closed-loop neural circuit formed by the Purkinje cells, the cerebellar deep nucleus and the inferior olive in and around the cerebellum, together with the highest density of gap junctions, which regulate synchronous activities of the inferior olive nucleus, constitutes computational machinery for learning from a small sample. We discuss recent experimental and computational advances associated with this hypothesis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Teleoperation of Robonaut Using Finger Tracking

    NASA Technical Reports Server (NTRS)

    Champoux, Rachel G.; Luo, Victor

    2012-01-01

    With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is needed because of the level of complexity in a human-like hand, for which a joystick is not accurate enough. Moreover, for some tasks, using one's own hands allows the user to communicate intentions more effectively than other input. The purpose of this project was to develop a natural user interface for someone to teleoperate a robot that is elsewhere. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time in the user interface. The information is then relayed to a simulator, the stand-in for Robonaut, at a slight delay.

  1. Robotic hand with locking mechanism using TCP muscles for applications in prosthetic hand and humanoids

    NASA Astrophysics Data System (ADS)

    Saharan, Lokesh; Tadesse, Yonas

    2016-04-01

    This paper presents a biomimetic, lightweight, 3D printed and customizable robotic hand with a locking mechanism, consisting of Twisted and Coiled Polymer (TCP) muscles based on nylon precursor fibers as artificial muscles. Previously, we have presented a small-sized biomimetic hand using nylon-based artificial muscles and fishing line muscles as actuators. The current study focuses on an adult-sized prosthetic hand with an improved design and a position/force locking system. Energy efficiency is always a matter of concern in making compact, lightweight, durable and cost-effective devices. In the natural human hand, if we keep holding objects for a long time, we get tired because of the continuous use of energy to keep the fingers in certain positions. Similarly, in prosthetic hands we also need to provide energy continuously to the artificial muscles to hold an object for a certain period of time, which is certainly not energy efficient. In this work, we describe the design of the robotic hand and locking mechanism along with experimental results on the performance of the locking mechanism.

  2. Investigation on Requirements of Robotic Platforms to Teach Social Skills to Individuals with Autism

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Chris; Kuester, Deitra; Sheehan, Mark; Dhanya, Sneha

    This paper reports on some of the robotic platforms used in the project AUROSO which investigates the use of robots as educationally useful interventions to improve social interactions for individuals with Autism Spectrum Disorders (ASD). Our approach to treatment uses an educational intervention based on Socially Assistive Robotics (SAR), the DIR/Floortime intervention model and social script/stories. Requirements are established and a variety of robotic models/platforms were investigated as to the feasibility of an economical, practical and efficient means of helping teach social skills to individuals with ASD for use by teachers, families, service providers and other community organizations.

  3. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
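
    The two-level structure summarized above (hardware abstractions feeding platform-independent robot attributes, which in turn feed behaviors) can be pictured with a short sketch. The class names below are illustrative assumptions and do not reproduce the patented implementation.

      from abc import ABC, abstractmethod

      # Hardware abstraction level: defines, monitors, and controls a hardware module.
      class RangeSensorAbstraction(ABC):
          @abstractmethod
          def ranges(self) -> list: ...

      class HypotheticalLidar(RangeSensorAbstraction):
          def ranges(self):
              return [2.0, 1.5, 0.4, 3.0]          # stand-in scan data

      # Robot abstraction level: an attribute built from hardware abstractions.
      class ObstacleProximity:
          """Robot attribute that isolates behaviors from the sensor hardware."""
          def __init__(self, sensor: RangeSensorAbstraction):
              self._sensor = sensor

          @property
          def nearest(self) -> float:
              return min(self._sensor.ranges())

      # Behavior built only from robot attributes, never from hardware directly.
      def guarded_motion(proximity: ObstacleProximity, stop_distance=0.5):
          return "STOP" if proximity.nearest < stop_distance else "GO"

      print(guarded_motion(ObstacleProximity(HypotheticalLidar())))   # STOP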

  4. Autonomous Exploration Using an Information Gain Metric

    DTIC Science & Technology

    2016-03-01

    implemented on 2 different robotic platforms: the PackBot designed by iRobot and the Jackal designed by Clearpath Robotics. The PackBot, shown in Fig. 1, is a... Jackal is a wheeled, man-portable robot system. Both robots were equipped with a Hokuyo UTM-30LX-EW scanning laser range finder with a motor...Fig. 2, the robot was used to explore and map the second floor of a building located in a military and rescue training facility. The Jackal platform
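
    The snippet above names an information-gain metric for exploration; one common form of such a metric is the expected reduction in occupancy-grid entropy around a candidate viewpoint. The sketch below scores candidate poses on a toy grid in that spirit and is an assumption about the general technique, not the report's specific formulation.

      import numpy as np

      def cell_entropy(p):
          """Shannon entropy of an occupancy probability (near 0 for known cells)."""
          p = np.clip(p, 1e-9, 1 - 1e-9)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def information_gain(grid, cx, cy, radius=2):
          """Sum of entropy in the sensor footprint around a candidate pose:
          unknown cells (p=0.5) contribute 1 bit each, known cells almost nothing."""
          h, w = grid.shape
          ys, xs = np.ogrid[:h, :w]
          mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
          return cell_entropy(grid[mask]).sum()

      # Toy occupancy grid: left half mapped (p near 0), right half unknown (p=0.5)
      grid = np.full((10, 10), 0.5)
      grid[:, :5] = 0.05

      candidates = [(2, 5), (8, 5)]                  # (x, y) viewpoints, hypothetical
      best = max(candidates, key=lambda c: information_gain(grid, *c))
      print("explore toward", best)                  # the unknown right half wins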

  5. Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.

    PubMed

    Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela

    2015-12-01

    There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
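
    For readers unfamiliar with dynamic neural fields, the sketch below integrates a one-dimensional Amari-type field with local excitation and broader inhibition until a self-stabilized activation bump forms over a localized input. It is a generic textbook formulation with assumed parameter values, not the multi-field architecture actually used for ARoS.

      import numpy as np

      def simulate_dnf(steps=300, n=101, dt=0.1, tau=10.0, h=-2.0):
          """Integrate du/dt = (-u + h + input + w * f(u)) / tau on a 1-D field."""
          x = np.linspace(-10, 10, n)
          u = np.full(n, h)                                  # resting level
          stim = 4.0 * np.exp(-(x - 2.0) ** 2 / 2.0)         # localized input at x = 2
          dx = x[1] - x[0]
          # Interaction kernel: local excitation plus constant (global) inhibition
          k = 3.0 * np.exp(-np.subtract.outer(x, x) ** 2 / 2.0) - 1.0
          f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))       # sigmoidal output
          for _ in range(steps):
              u += dt * (-u + h + stim + dx * k @ f(u)) / tau
          return x, u

      x, u = simulate_dnf()
      print("peak at x =", round(float(x[np.argmax(u)]), 1))   # approx. 2.0 (stable bump)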

  6. Comparison of the LEGO Mindstorms NXT and EV3 Robotics Education Platforms

    ERIC Educational Resources Information Center

    Sherrard, Ann; Rhodes, Amy

    2014-01-01

    The release of the latest LEGO Mindstorms EV3 robotics platform in September 2013 has provided a dilemma for many youth robotics leaders. There is a need to understand the differences in the Mindstorms NXT and EV3 in order to make future robotics purchases. In this article the differences are identified regarding software, hardware, sensors, the…

  7. PR-PR: Cross-Platform Laboratory Automation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, G; Stawski, N; Goyal, G

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  8. PR-PR: cross-platform laboratory automation system.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J

    2014-08-15

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.
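
    The cross-platform design described in these two records, a single protocol specification driving platform-specific translation modules, follows a familiar pattern. The sketch below illustrates that pattern generically; it does not reproduce PR-PR syntax, and the operation names and translators are invented for illustration.

      # Generic illustration of one protocol, many backends (not actual PR-PR code)
      protocol = [
          ("transfer", {"source": "A1", "dest": "B1", "volume_ul": 50}),
          ("mix",      {"well": "B1", "repetitions": 3}),
      ]

      def to_liquid_handler(steps):
          """Hypothetical translation into a liquid-handling robot command list."""
          return [f"ASPIRATE {s['volume_ul']}uL {s['source']}; DISPENSE {s['dest']}"
                  if op == "transfer" else f"MIX {s['well']} x{s['repetitions']}"
                  for op, s in steps]

      def to_english(steps):
          """Translation into human-readable instructions for manual execution."""
          text = {"transfer": "Transfer {volume_ul} uL from {source} to {dest}.",
                  "mix": "Mix well {well} {repetitions} times."}
          return [text[op].format(**s) for op, s in steps]

      print("\n".join(to_liquid_handler(protocol)))
      print("\n".join(to_english(protocol)))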

  9. Automated platform for designing multiple robot work cells

    NASA Astrophysics Data System (ADS)

    Osman, N. S.; Rahman, M. A. A.; Rahman, A. A. Abdul; Kamsani, S. H.; Bali Mohamad, B. M.; Mohamad, E.; Zaini, Z. A.; Rahman, M. F. Ab; Mohamad Hatta, M. N. H.

    2017-06-01

    Designing the multiple robot work cells is very knowledge-intensive, intricate, and time-consuming process. This paper elaborates the development process of a computer-aided design program for generating the multiple robot work cells which offer a user-friendly interface. The primary purpose of this work is to provide a fast and easy platform for less cost and human involvement with minimum trial and errors adjustments. The automated platform is constructed based on the variant-shaped configuration concept with its mathematical model. A robot work cell layout, system components, and construction procedure of the automated platform are discussed in this paper where integration of these items will be able to automatically provide the optimum robot work cell design according to the information set by the user. This system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tool. The current outcomes of this work provide a basis for future investigation in developing a flexible configuration system for the multiple robot work cells.

  10. Design and Evolution of a Modular Tensegrity Robot Platform

    NASA Technical Reports Server (NTRS)

    Bruce, Jonathan; Caluwaerts, Ken; Iscen, Atil; Sabelhaus, Andrew P.; SunSpiral, Vytas

    2014-01-01

    NASA Ames Research Center is developing a compliant modular tensegrity robotic platform for planetary exploration. In this paper we present the design and evolution of the platform's main hardware component, an untethered, robust tensegrity strut, with rich sensor feedback and cable actuation. Each strut is a complete robot, and multiple struts can be combined together to form a wide range of complex tensegrity robots. Our current goal for the tensegrity robotic platform is the development of SUPERball, a 6-strut icosahedron underactuated tensegrity robot aimed at dynamic locomotion for planetary exploration rovers and landers, but the aim is for the modular strut to enable a wide range of tensegrity morphologies. SUPERball is a second generation prototype, evolving from the tensegrity robot ReCTeR, which is also a modular, lightweight, highly compliant 6-strut tensegrity robot that was used to validate our physics-based NASA Tensegrity Robot Toolkit (NTRT) simulator. Many hardware design parameters of the SUPERball were driven by locomotion results obtained in our validated simulator. These evolutionary explorations helped constrain motor torque and speed parameters, along with strut and string stress. As construction of the hardware has been finalized, we have also used the same evolutionary framework to evolve controllers that respect the built hardware parameters.
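
    The evolutionary explorations mentioned above amount to searching controller (and hardware) parameters against a simulated locomotion objective. The toy loop below shows that search with a made-up fitness function; it stands in for, and does not reproduce, the NTRT-based evolution used for SUPERball.

      import random

      def fitness(params):
          """Stand-in for a simulated locomotion score (e.g., distance traveled)."""
          target = [0.7, 0.2, 0.9]                           # hypothetical optimum
          return -sum((p - t) ** 2 for p, t in zip(params, target))

      def evolve(pop_size=30, generations=50, sigma=0.1, n_params=3):
          pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 4]                 # truncation selection
              pop = [[min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
                      for g in random.choice(parents)] for _ in range(pop_size)]
          return max(pop, key=fitness)

      print([round(g, 2) for g in evolve()])                 # converges near the optimum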

  11. Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment

    PubMed Central

    Yamashita, Yuichi; Tani, Jun

    2008-01-01

    It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems. PMID:18989398
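
    The key mechanism, neurons with different time constants, can be written compactly. The sketch below updates a continuous-time recurrent network with one fast and one slow group of leaky-integrator units and random weights, purely to illustrate the multiple-timescales update rule; the sizes, time constants and input are assumed values, not the trained model from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n_fast, n_slow = 20, 5
      n = n_fast + n_slow
      tau = np.r_[np.full(n_fast, 2.0), np.full(n_slow, 50.0)]   # fast vs slow time constants
      W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))              # random recurrent weights
      u = np.zeros(n)

      def step(u, ext):
          """Leaky-integrator update: slow units (large tau) change gradually and
          carry sequence-level context, while fast units track fine detail."""
          y = np.tanh(u)
          return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + ext)

      for t in range(100):
          ext = np.zeros(n)
          ext[:n_fast] = np.sin(0.3 * t)        # drive only the fast group (toy input)
          u = step(u, ext)

      print("fast mean |u|:", np.abs(u[:n_fast]).mean().round(3),
            "| slow mean |u|:", np.abs(u[-n_slow:]).mean().round(3))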

  12. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence

    PubMed Central

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states by comparing brain responses during the interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances against the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents. PMID:22586381

  13. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence.

    PubMed

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states by comparing brain responses during the interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances against the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents.

  14. Does transition from the da Vinci Si to Xi robotic platform impact single-docking technique for robot-assisted laparoscopic nephroureterectomy?

    PubMed

    Patel, Manish N; Aboumohamed, Ahmed; Hemal, Ashok

    2015-12-01

    To describe our robot-assisted nephroureterectomy (RNU) technique for benign indications and RNU with en bloc excision of bladder cuff (BCE) and lymphadenectomy (LND) for malignant indications using the da Vinci Si and da Vinci Xi robotic platform, with its pros and cons. The port placement described for Si can be used for standard and S robotic systems. This is the first report in the literature on the use of the da Vinci Xi robotic platform for RNU. After a substantial experience of RNU using different da Vinci robots from the standard to the Si platform in a single-docking fashion for benign and malignant conditions, we started using the newly released da Vinci Xi robot since 2014. The most important differences are in port placement and effective use of the features of da Vinci Xi robot while performing simultaneous upper and lower tract surgery. Patient positioning, port placement, step-by-step technique of single docking RNU-LND-BCE using the da Vinci Si and da Vinci Xi robot are shown in an accompanying video with the goal that centres using either robotic system benefit from the hints and tips. The first segment of video describes RNU-LND-BCE using the da Vinci Si followed by the da Vinci Xi to highlight differences. There was no need for patient repositioning or robot re-docking with the new da Vinci Xi robotic platform. We have experience of using different robotic systems for single docking RNU in 70 cases for benign (15) and malignant (55) conditions. The da Vinci Xi robotic platform helps operating room personnel in its easy movement, allows easier patient side-docking with the help of its boom feature, in addition to easy and swift movements of the robotic arms. The patient clearance feature can be used to avoid collision with the robotic arms or the patient's body. In patients with challenging body habitus and in situations where bladder cuff management is difficult, modifications can be made through reassigning the camera to a different port with utilisation of the retargeting feature of the da Vinci Xi when working on the bladder cuff or in the pelvis. The vision of the camera used for da Vinci Xi was initially felt to be inferior to that of the da Vinci Si; however, with a subsequent software upgrade this was much improved. The base of the da Vinci Xi is bigger, which does not slide and occasionally requires a change in table placement/operating room setup, and requires side-docking especially when dealing with very tall and obese patients for pelvic surgery. RNU alone or with LND-BCE is a challenging surgical procedure that addresses the upper and lower urinary tract simultaneously. Single docking and single robotic port placement for RNU-LND-BCE has evolved with the development of different generations of the robotic system. These procedures can be performed safely and effectively using the da Vinci S, Si or Xi robotic platform. The new da Vinci Xi robotic platform is more user-friendly, has easy installation, and is intuitive for surgeons using its features. © 2015 The Authors BJU International © 2015 BJU International Published by John Wiley & Sons Ltd.

  15. FLS tasks can be used as an ergonomic discriminator between laparoscopic and robotic surgery.

    PubMed

    Zihni, Ahmed M; Ohu, Ikechukwu; Cavallo, Jaime A; Ousley, Jenny; Cho, Sohyung; Awad, Michael M

    2014-08-01

    Robotic surgery may result in ergonomic benefits to surgeons. In this pilot study, we utilize surface electromyography (sEMG) to describe a method for identifying ergonomic differences between laparoscopic and robotic platforms using validated Fundamentals of Laparoscopic Surgery (FLS) tasks. We hypothesize that FLS task performance on laparoscopic and robotic surgical platforms will produce significant differences in mean muscle activation, as quantified by sEMG. Six right-hand-dominant subjects with varying experience performed FLS peg transfer (PT), pattern cutting (PC), and intracorporeal suturing (IS) tasks on laparoscopic and robotic platforms. sEMG measurements were obtained from each subject's bilateral bicep, tricep, deltoid, and trapezius muscles. EMG measurements were normalized to the maximum voluntary contraction (MVC) of each muscle of each subject. Subjects repeated each task three times per platform, and mean values used for pooled analysis. Average normalized muscle activation (%MVC) was calculated for each muscle group in all subjects for each FLS task. We compared mean %MVC values with paired t tests and considered differences with a p value less than 0.05 to be statistically significant. Mean activation of right bicep (2.7 %MVC lap, 1.3 %MVC robotic, p = 0.019) and right deltoid muscles (2.4 %MVC lap, 1.0 %MVC robotic, p = 0.019) were significantly elevated during the laparoscopic compared to the robotic IS task. The mean activation of the right trapezius muscle was significantly elevated during robotic compared to the laparoscopic PT (1.6 %MVC lap, 3.5 %MVC robotic, p = 0.040) and PC (1.3 %MVC lap, 3.6 %MVC robotic, p = 0.0018) tasks. FLS tasks are validated, readily available instruments that are feasible for use in demonstrating ergonomic differences between surgical platforms. In this study, we used FLS tasks to compare mean muscle activation of four muscle groups during laparoscopic and robotic task performance. FLS tasks can serve as the basis for larger studies to further describe ergonomic differences between laparoscopic and robotic surgery.
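
    The core analysis, normalizing each muscle's sEMG amplitude to its maximum voluntary contraction and comparing platforms with paired t-tests, is easy to reproduce in outline. In the sketch below the numbers are invented for illustration and SciPy is assumed to be available; this is not the study's data or code.

      import numpy as np
      from scipy import stats

      def percent_mvc(emg_rms, mvc_rms):
          """Normalize task EMG amplitude to the subject's maximum voluntary contraction."""
          return 100.0 * emg_rms / mvc_rms

      # Hypothetical right-biceps data for 6 subjects (RMS amplitude, arbitrary units)
      mvc        = np.array([1.00, 0.90, 1.10, 0.95, 1.05, 1.00])
      lap_task   = np.array([0.027, 0.025, 0.031, 0.024, 0.029, 0.026])
      robot_task = np.array([0.013, 0.012, 0.015, 0.011, 0.014, 0.013])

      lap_pct   = percent_mvc(lap_task, mvc)
      robot_pct = percent_mvc(robot_task, mvc)

      t, p = stats.ttest_rel(lap_pct, robot_pct)       # paired t-test across subjects
      print(f"lap {lap_pct.mean():.2f} %MVC vs robot {robot_pct.mean():.2f} %MVC, p={p:.4f}")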

  16. Application of robotics in gastrointestinal endoscopy: A review

    PubMed Central

    Yeung, Baldwin Po Man; Chiu, Philip Wai Yan

    2016-01-01

    Multiple robotic flexible endoscope platforms have been developed through cross-specialty collaboration between engineers and medical doctors. However, a significant number of these platforms have been developed for the natural orifice transluminal endoscopic surgery paradigm. An increasing amount of evidence suggests that the focus of development should instead be placed on advanced endolumenal procedures such as endoscopic submucosal dissection. A thorough literature analysis was performed to assess the current status of robotic flexible endoscopic platforms designed for advanced endolumenal procedures. Current efforts are mainly focused on robotic locomotion and robotic instrument control. In the future, advances in actuation and servoing technology, optical analysis, augmented reality and wireless power transmission technology will no doubt further advance the field of robotic endoscopy. Globally, health systems have become increasingly budget-conscious; widespread acceptance of robotic endoscopy will depend on careful design to ensure its delivery of a cost-effective service. PMID:26855540

  17. Transparent actuators and robots based on single-layer superaligned carbon nanotube sheet and polymer composites.

    PubMed

    Chen, Luzhuo; Weng, Mingcen; Zhang, Wei; Zhou, Zhiwei; Zhou, Yi; Xia, Dan; Li, Jiaxin; Huang, Zhigao; Liu, Changhong; Fan, Shoushan

    2016-03-28

    Transparent actuators have been attracting emerging interest recently, as they demonstrate potential applications in the fields of invisible robots, tactical displays, variable-focus lenses, and flexible cellular phones. However, previous technologies did not simultaneously realize macroscopic transparent actuators with advantages of large-shape deformation, low-voltage-driven actuation and fast fabrication. Here, we develop a fast approach to fabricate a high-performance transparent actuator based on single-layer superaligned carbon nanotube sheet and polymer composites. Various advantages of single-layer nanotube sheets including high transparency, considerable conductivity, and ultra-thin dimensions together with selected polymer materials completely realize all the above required advantages. Also, this is the first time that a single-layer nanotube sheet has been used to fabricate actuators with high transparency, avoiding the structural damage to the single-layer nanotube sheet. The transparent actuator shows a transmittance of 72% at the wavelength of 550 nm and bends remarkably with a curvature of 0.41 cm(-1) under a DC voltage for 5 s, demonstrating a significant advance in technological performances compared to previous conventional actuators. To illustrate their great potential usage, a transparent wiper and a humanoid robot "hand" were elaborately designed and fabricated, which initiate a new direction in the development of high-performance invisible robotics and other intelligent applications with transparency.

  18. Constrained VPH+: a local path planning algorithm for a bio-inspired crawling robot with customized ultrasonic scanning sensor.

    PubMed

    Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan

    This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.
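
    Histogram-based local planners of the VPH family share a common core: bin range readings into angular sectors, mask the sectors that are blocked, and steer toward the admissible heading closest to the goal. The sketch below shows only that common core with assumed sector and range parameters; it is not the constrained VPH+ formulation developed in the paper.

      import math

      def select_heading(scan, goal_angle, n_sectors=36, safe_range=0.8):
          """scan: list of (angle_rad, range_m) from the ultrasonic sweep.
          Returns the admissible sector heading closest to the goal direction."""
          sector_width = 2 * math.pi / n_sectors
          blocked = [False] * n_sectors
          for angle, rng in scan:
              if rng < safe_range:
                  blocked[int((angle % (2 * math.pi)) / sector_width)] = True
          candidates = [i * sector_width for i in range(n_sectors) if not blocked[i]]
          if not candidates:
              return None                                   # no admissible heading: stop
          return min(candidates,
                     key=lambda h: abs(math.atan2(math.sin(h - goal_angle),
                                                  math.cos(h - goal_angle))))

      # Toy scan: obstacle dead ahead, goal straight ahead, so the planner deflects sideways
      scan = [(0.0, 0.4), (0.17, 0.5), (-0.17, 0.5), (1.0, 3.0), (-1.0, 3.0)]
      print(select_heading(scan, goal_angle=0.0))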

  19. An integrated movement capture and control platform applied towards autonomous movements of surgical robots.

    PubMed

    Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D

    2009-01-01

    Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
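
    The capture-and-replay idea, logging timestamped commands during teleoperation and later re-emitting them as control signals, can be sketched very simply. The interface below is hypothetical and stands in for the microcontroller link and emulated control signals described in the abstract.

      import time

      class MovementRecorder:
          """Capture timestamped joint commands during teleoperation, then replay them."""
          def __init__(self):
              self.log = []                       # list of (t_rel, joint_positions)
              self._t0 = None

          def capture(self, joint_positions):
              if self._t0 is None:
                  self._t0 = time.monotonic()
              self.log.append((time.monotonic() - self._t0, list(joint_positions)))

          def replay(self, send_command):
              """Re-emit the recorded commands with the original timing."""
              start = time.monotonic()
              for t_rel, q in self.log:
                  delay = t_rel - (time.monotonic() - start)
                  if delay > 0:
                      time.sleep(delay)
                  send_command(q)

      rec = MovementRecorder()
      for q in ([0.0, 0.1], [0.05, 0.2], [0.1, 0.3]):   # hypothetical 2-joint samples
          rec.capture(q)
      rec.replay(lambda q: print("emulated command:", q))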

  20. Transparent actuators and robots based on single-layer superaligned carbon nanotube sheet and polymer composites

    NASA Astrophysics Data System (ADS)

    Chen, Luzhuo; Weng, Mingcen; Zhang, Wei; Zhou, Zhiwei; Zhou, Yi; Xia, Dan; Li, Jiaxin; Huang, Zhigao; Liu, Changhong; Fan, Shoushan

    2016-03-01

    Transparent actuators have been attracting emerging interest recently, as they demonstrate potential applications in the fields of invisible robots, tactical displays, variable-focus lenses, and flexible cellular phones. However, previous technologies did not simultaneously realize macroscopic transparent actuators with advantages of large-shape deformation, low-voltage-driven actuation and fast fabrication. Here, we develop a fast approach to fabricate a high-performance transparent actuator based on single-layer superaligned carbon nanotube sheet and polymer composites. Various advantages of single-layer nanotube sheets including high transparency, considerable conductivity, and ultra-thin dimensions together with selected polymer materials completely realize all the above required advantages. Also, this is the first time that a single-layer nanotube sheet has been used to fabricate actuators with high transparency, avoiding the structural damage to the single-layer nanotube sheet. The transparent actuator shows a transmittance of 72% at the wavelength of 550 nm and bends remarkably with a curvature of 0.41 cm-1 under a DC voltage for 5 s, demonstrating a significant advance in technological performances compared to previous conventional actuators. To illustrate their great potential usage, a transparent wiper and a humanoid robot ``hand'' were elaborately designed and fabricated, which initiate a new direction in the development of high-performance invisible robotics and other intelligent applications with transparency.
Electronic supplementary information (ESI) available: Video records of the actuation process of the transparent wiper and the grabbing-releasing process of the transparent robot ``hand'', transmittance spectra of the PET and BOPP films, the SEM image showing the thickness of the SACNT sheet, calculation of the curvature, calculation of energy efficiency, experimental results of the control experiment, modeling of the SACNT/PET and PET/BOPP composites and experimental results of the repeatability test. See DOI: 10.1039/c5nr07237a

  1. BeBot: A Modular Mobile Miniature Robot Platform Supporting Hardware Reconfiguration and Multi-standard Communication

    NASA Astrophysics Data System (ADS)

    Herbrechtsmeier, Stefan; Witkowski, Ulf; Rückert, Ulrich

    Mobile robots are becoming more and more important in current research and education. Especially small 'on the table' experiments attract interest, because they need no additional or special laboratory equipment. In this context, platforms are desirable that are small, simple to access, and relatively easy to program. An additional powerful information-processing unit is advantageous to simplify the implementation of algorithms and the porting of software from desktop computers to the robot platform. In this paper we present a new versatile miniature robot that can be ideally used for research and education. The small size of the robot of about 9 cm edge length, its robust drive and its modular structure make the robot a general device for single- and multi-robot experiments executed 'on the table'. For programming and evaluation the robot can be wirelessly connected via Bluetooth or WiFi. The operating system of the robot is based on the standard Linux kernel and the GNU C standard library. A Player/Stage model eases software development and testing.

  2. Simulation of cooperating robot manipulators on a mobile platform

    NASA Technical Reports Server (NTRS)

    Murphy, Stephen H.; Wen, John Ting-Yung; Saridis, George N.

    1991-01-01

    The dynamic equations of motion are presented for two or more cooperating manipulators on a freely moving mobile platform. The system of cooperating robot manipulators forms a closed kinematic chain, so the forces of interaction must be included in the formulation of robot and platform dynamics. The formulation includes the full dynamic interactions from arms to platform and arm tip to arm tip, as well as the possible translation and rotation of the platform. The equations of motion are shown to be identical in structure to the fixed-platform cooperative manipulator dynamics. The number of DOFs of the system is sufficiently large to make recursive dynamic calculation methods potentially more efficient than closed-form solutions. A complete simulation of a free-floating platform with two 6-DOF manipulators is presented, along with a multiple-arm controller to position the common load.
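
    In standard manipulator notation, dynamics of this closed-chain form are commonly written as below, where q stacks the platform and joint coordinates, tau collects the actuator generalized forces, and the f_k are the interaction wrenches at the arm tips mapped through the corresponding Jacobians J_k. This is the generic structure the abstract refers to, not the paper's exact derivation.

      M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau + \sum_{k} J_k^{\mathsf{T}}(q)\, f_k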

  3. Rapid Prototyping Platform for Robotics Applications

    ERIC Educational Resources Information Center

    Hwang, Kao-Shing; Hsiao, Wen-Hsu; Shing, Gaung-Ting; Chen, Kim-Joan

    2011-01-01

    For the past several years, a team in the Department of Electrical Engineering (EE), National Chung Cheng University, Taiwan, has been establishing a pedagogical approach to embody embedded systems in the context of robotics. To alleviate the burden on students in the robotics curriculum in their junior and senior years, a training platform on…

  4. 3D printed rapid disaster response

    NASA Astrophysics Data System (ADS)

    Lacaze, Alberto; Murphy, Karl; Mottern, Edward; Corley, Katrina; Chu, Kai-Dee

    2014-05-01

    Under the Department of Homeland Security-sponsored Sensor-smart Affordable Autonomous Robotic Platforms (SAARP) project, Robotic Research, LLC is developing an affordable and adaptable method to provide disaster response robots developed with 3D printer technology. The SAARP Store contains a library of robots, a developer storefront, and a user storefront. The SAARP Store allows the user to select, print, assemble, and operate the robot. In addition to the SAARP Store, two platforms are currently being developed. They use a set of common non-printed components that will allow the later design of other platforms that share non-printed components. During disasters, new challenges are faced that require customized tools or platforms. Instead of prebuilt and prepositioned supplies, a library of validated robots will be catalogued to satisfy various challenges at the scene. 3D printing components will allow these customized tools to be deployed in a fraction of the time that would normally be required. While the current system is focused on supporting disaster response personnel, this system will be expandable to a range of customers, including domestic law enforcement, the armed services, universities, and research facilities.

  5. Uncanny valley: A preliminary study on the acceptance of Malaysian urban and rural population toward different types of robotic faces

    NASA Astrophysics Data System (ADS)

    Tay, T. T.; Low, Raymond; Loke, H. J.; Chua, Y. L.; Goh, Y. H.

    2018-04-01

    The proliferation of robotic technologies in recent years brings robots closer to humanity. Much research is ongoing, at various stages of development, to bring robots into our homes, schools, nurseries, elderly care centres, offices, hospitals and factories. With recently developed robots tending to have appearances that increasingly resemble household animals and humans, there is a need to study the existence of the uncanny valley phenomenon. Generally, people's acceptance of robots increases as the robots acquire more human-like features, up to a stage where people feel very uncomfortable, eerie, fearful and disgusted because the robot's appearance is almost human-like but not yet human. This phenomenon, called the uncanny valley, was first reported by Masahiro Mori. Numerous studies have measured the existence of the uncanny valley in Japan and European countries; however, limited research on the uncanny valley phenomenon has been reported in Malaysia so far. In view of the different cultural background and exposure to robotics technology of the Malaysian population compared to European or East Asian populations, it is worthwhile to study this phenomenon in the Malaysian context. The main aim of this work is to conduct a preliminary study to determine the existence of the uncanny valley phenomenon in Malaysian urban and rural populations. It is of interest whether there are any differences in acceptance between the two populations; among other things, the urban and rural populations differ in the rate of urbanization and in exposure to the latest technologies. A set of four interactive robotic faces and an ideal human model representing the fifth robot are used in this study. The robots have features resembling a cute animal, a cartoon character, a typical robot and a human. Questionnaire surveys were conducted on respondents from urban and rural populations. The survey data collected were analysed to determine the preferred features in a humanoid robot, the respondents' acceptance of the robotic faces and the existence of the uncanny valley phenomenon. Based on this limited study, it is found that the uncanny valley phenomenon exists in both the Malaysian urban and rural populations.

  6. I want what you've got: Cross-platform portability and human-robot interaction assessment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julie L. Marble, Ph.D.; Douglas A. Few; David J. Bruemmer

    2005-08-01

    Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach the level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent in conducting human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems and for how humans and robots interact in true interactive teams.

  7. Development of microsized slip sensors using dielectric elastomer for incipient slippage

    NASA Astrophysics Data System (ADS)

    Hwang, Do-Yeon; Kim, Baek-chul; Cho, Han-Jeong; Li, Zhengyuan; Lee, Youngkwan; Nam, Jae-Do; Moon, Hyungpil; Choi, Hyouk Ryeol; Koo, J. C.

    2014-04-01

    Humanoid robot hands have received significant attention in various fields of study. For a dexterous robot hand, a slip-detecting tactile sensor is essential for grasping objects safely. Slip sensors are also useful in robotics and prosthetics to improve precision during manipulation tasks. In this paper, a sensor with a biomimetic, human-skin-like structure is fabricated. We report a resistive tactile sensor that can detect slip on the surface of the sensor structure. The newly developed resistive slip sensor uses acrylonitrile-butadiene rubber (NBR) as a dielectric substrate and carbon particles as the electrode material. The sensor has fingerprint-like structures that play a role similar to that of the ridges of a human fingerprint. Slip can be measured because deformation of the sensor structure changes the resistance by forming a new conductive route. To verify the effectiveness of the proposed slip detection, experiments using a prototype resistive slip sensor were conducted with a slip-detection algorithm, and slip was successfully detected. In this paper, we discuss the slip-detection properties of our sensor and its detection principle.
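
    The paper's detection principle (slip produces transient changes in the measured resistance) can be illustrated with a toy detector that flags bursts of short-term fluctuation in a resistance time series; the window length and threshold below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def detect_slip(resistance, window=5, threshold=0.03):
        """Flag incipient slip when the short-term relative fluctuation of the
        sensor resistance exceeds a threshold (illustrative logic only)."""
        r = np.asarray(resistance, dtype=float)
        slip = np.zeros(len(r), dtype=bool)
        for i in range(window, len(r)):
            recent = r[i - window:i]
            slip[i] = np.std(recent) / np.mean(recent) > threshold
        return slip

    # Steady grasp followed by a burst of resistance fluctuation (simulated slip).
    rng = np.random.default_rng(0)
    signal = np.concatenate([np.full(50, 10.0), 10.0 + 1.5 * rng.standard_normal(20)])
    print(np.flatnonzero(detect_slip(signal))[:3])   # slip flagged shortly after sample 50
    ```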

  8. Information driven self-organization of complex robotic behaviors.

    PubMed

    Martius, Georg; Der, Ralf; Ay, Nihat

    2013-01-01

    Information theory is a powerful tool for expressing principles that drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies on high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded in. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality, which prevents learning systems from scaling well.
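
    As a rough illustration of the driving quantity, the sketch below estimates the predictive information I(past; future) of a discretized sensor stream with a plug-in mutual-information estimate; it is a toy stand-in for intuition only and does not reproduce the paper's TiPI derivation or controller update rules.

    ```python
    import numpy as np
    from collections import Counter

    def predictive_information(symbols, k=1):
        """Plug-in estimate of I(past; future) for a discrete time series,
        using length-k past and future windows (toy illustration only)."""
        pairs = [(tuple(symbols[t - k:t]), tuple(symbols[t:t + k]))
                 for t in range(k, len(symbols) - k + 1)]
        n = len(pairs)
        joint = Counter(pairs)
        past = Counter(p for p, _ in pairs)
        future = Counter(f for _, f in pairs)
        pi_bits = 0.0
        for (p, f), c in joint.items():
            p_joint = c / n
            pi_bits += p_joint * np.log2(p_joint / ((past[p] / n) * (future[f] / n)))
        return pi_bits

    # A noisy but regular sensor stream carries nonzero predictive information.
    rng = np.random.default_rng(0)
    x = [int((t // 3) % 2) if rng.random() > 0.1 else int(rng.integers(2))
         for t in range(5000)]
    print(predictive_information(x, k=2))
    ```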

  9. MonoSLAM: real-time single camera SLAM.

    PubMed

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
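
    MonoSLAM maintains a joint probabilistic state over the camera and a sparse landmark map; the schematic extended Kalman filter skeleton below shows the predict/update structure such a filter uses. The motion and measurement models f, h and their Jacobians are caller-supplied placeholders, not the authors' implementation.

    ```python
    import numpy as np

    def ekf_predict(x, P, f, F_jac, Q):
        """Propagate the joint camera/map state and covariance through the motion model f."""
        x_pred = f(x)
        F = F_jac(x)
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, h, H_jac, R):
        """Fuse one image measurement z of an already-mapped landmark."""
        H = H_jac(x)
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ (z - h(x))
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new
    ```

    In the paper's setting, f would encode the smooth constant-velocity camera motion model and h the projection of a mapped landmark into the current image; here both are left abstract.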

  10. KC-135 materials handling robotics

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.

    1991-01-01

    Robot dynamics and control will become an important issue for implementing productive platforms in space. Robotic operations will become necessary for man-tended stations and for efficient performance of routine operations in a manned platform. The current constraints on the use of robotic devices in a microgravity environment appear to be due to an anticipated increase in acceleration levels caused by manipulator motion and to safety concerns. The objective of this study is to provide baseline data to meet that need. Most texts and papers dealing with the kinematics and dynamics of robots assume that the manipulator is composed of joints separated by rigid links. However, in recent years several groups have begun to study the dynamics of flexible manipulators, primarily for applying robots in space and for improving the efficiency and precision of robotic systems. Robotic systems planned for implementation in space have a number of constraints to overcome. Additional concepts that have to be worked out in any robotic implementation for a space platform include teleoperation and the degree of autonomous control. Significant results were generated in developing a robotic workcell for performing robotics research on the KC-135 aircraft in preparation for future space-based robotics applications. In addition, it was shown that TREETOPS can be used to simulate the dynamics of robot manipulators for both space and ground-based applications.

  11. Could robots become authentic companions in nursing care?

    PubMed

    Metzler, Theodore A; Lewis, Lundy M; Pope, Linda C

    2016-01-01

    Creating android and humanoid robots to furnish companionship in the nursing care of older people continues to attract substantial development capital and research. Some people object, though, that machines of this kind furnish human-robot interaction characterized by inauthentic relationships. In particular, robotic and artificial intelligence (AI) technologies have been charged with substituting mindless mimicry of human behaviour for the real presence of conscious caring offered by human nurses. When thus viewed as deceptive, the robots also have prompted corresponding concerns regarding their potential psychological, moral, and spiritual implications for people who will be interacting socially with these machines. The foregoing objections and concerns can be assessed quite differently, depending upon ambient religious beliefs or metaphysical presuppositions. The complaints may be set aside as unnecessary, for example, within religious traditions for which even current robots can be viewed as presenting spiritual aspects. Elsewhere, technological cultures may reject the complaints as expression of outdated superstition, holding that the machines eventually will enjoy a consciousness described entirely in materialist and behaviourist terms. While recognizing such assessments, the authors of this essay propose that the heart of the foregoing objections and concerns may be evaluated, in part, scientifically - albeit with a conclusion recommending fundamental revisions in AI modelling of human mental life. Specifically, considerations now favour introduction of AI models using interactive classical and quantum computation. Without this change, the answer to the essay's title question arguably is 'no' - with it, the answer plausibly becomes 'maybe'. Either outcome holds very interesting implications for nurses. © 2015 John Wiley & Sons Ltd.

  12. Technological advances in robotic-assisted laparoscopic surgery.

    PubMed

    Tan, Gerald Y; Goel, Raj K; Kaouk, Jihad H; Tewari, Ashutosh K

    2009-05-01

    In this article, the authors describe the evolution of urologic robotic systems and the current state-of-the-art features and existing limitations of the da Vinci S HD System (Intuitive Surgical, Inc.). They then review promising innovations in scaling down the footprint of robotic platforms, the early experience with mobile miniaturized in vivo robots, advances in endoscopic navigation systems using augmented reality technologies and tracking devices, the emergence of technologies for robotic natural orifice transluminal endoscopic surgery and single-port surgery, advances in flexible robotics and haptics, the development of new virtual reality simulator training platforms compatible with the existing da Vinci system, and recent experiences with remote robotic surgery and telestration.

  13. Task decomposition for a multilimbed robot to work in reachable but unorientable space

    NASA Technical Reports Server (NTRS)

    Su, Chau; Zheng, Yuan F.

    1991-01-01

    Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
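
    The core of the decomposition is a change of reference frame: once a platform pose is chosen, the arm subtask is simply the goal pose expressed relative to the platform. A minimal sketch with homogeneous transforms (function and variable names are ours, not the authors'):

    ```python
    import numpy as np

    def decompose_task(T_goal_world, T_platform_world):
        """Split a world-frame goal pose into the arm subtask: the end-effector
        pose expressed in the platform frame (both inputs are 4x4 homogeneous
        transforms)."""
        return np.linalg.inv(T_platform_world) @ T_goal_world

    # Example: platform moved 1 m along x; goal 2 m along x at the same orientation.
    T_platform = np.eye(4); T_platform[0, 3] = 1.0
    T_goal = np.eye(4);     T_goal[0, 3] = 2.0
    print(decompose_task(T_goal, T_platform))   # end effector 1 m ahead of the platform
    ```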

  14. Infant discrimination of humanoid robots

    PubMed Central

    Matsuda, Goh; Ishiguro, Hiroshi; Hiraki, Kazuo

    2015-01-01

    Recently, extremely humanlike robots called “androids” have been developed, some of which are already being used in the field of entertainment. In the context of psychological studies, androids are expected to be used in the future as fully controllable human stimuli to investigate human nature. In this study, we used an android to examine infant discrimination ability between human beings and non-human agents. Participants (N = 42 infants) were assigned to three groups based on their age, i.e., 6- to 8-month-olds, 9- to 11-month-olds, and 12- to 14-month-olds, and took part in a preferential looking paradigm. Of three types of agents involved in the paradigm—a human, an android modeled on the human, and a mechanical-looking robot made from the android—two at a time were presented side-by-side as they performed a grasping action. Infants’ looking behavior was measured using an eye tracking system, and the amount of time spent focusing on each of three areas of interest (face, goal, and body) was analyzed. Results showed that all age groups predominantly looked at the robot and at the face area, and that infants aged over 9 months watched the goal area for longer than the body area. There was no difference in looking times and areas focused on between the human and the android. These findings suggest that 6- to 14-month-olds are unable to discriminate between the human and the android, although they can distinguish the mechanical robot from the human. PMID:26441772

  15. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    PubMed Central

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
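
    The synchronization idea can be illustrated with a much-simplified pair of coupled phase oscillators, each advancing at its natural rate while being pulled toward the partner's phase; this Kuramoto-style sketch is only a stand-in for the paper's event-and-phase synchronization dynamics.

    ```python
    import numpy as np

    def simulate_phase_coupling(omega_a, omega_b, k, steps=2000, dt=0.01):
        """Two limit-cycle phases with mutual sine coupling; returns the wrapped
        phase difference over time."""
        phi = np.array([0.0, np.pi / 2])            # initial phases
        omega = np.array([omega_a, omega_b])        # natural frequencies (rad/s)
        diff = []
        for _ in range(steps):
            coupling = k * np.sin(phi[::-1] - phi)  # each phase pulled toward the other
            phi = phi + dt * (omega + coupling)
            diff.append(np.angle(np.exp(1j * (phi[0] - phi[1]))))
        return np.array(diff)

    residual = simulate_phase_coupling(omega_a=2.0, omega_b=2.2, k=0.5)
    print(abs(residual[-1]))   # small, roughly constant value -> phase locking
    ```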

  16. Babybot: a biologically inspired developing robotic agent

    NASA Astrophysics Data System (ADS)

    Metta, Giorgio; Panerai, Francesco M.; Sandini, Giulio

    2000-10-01

    The study of development, either artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We shall argue whether developmental studies might provide a different and potentially interesting perspective either on how to build an artificial adaptive agent, or on understanding how the brain solves sensory, motor, and cognitive tasks. It is our opinion that the acquisition of the proper behavior might indeed be facilitated because, within an ecological context, the agent, its adaptive structure and the environment dynamically interact, thus constraining the otherwise difficult learning problem. In very general terms we shall describe the proposed approach and supporting biological related facts. In order to further analyze these aspects from the modeling point of view, we shall demonstrate how a twelve-degrees-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. In particular, the experimental setup consists of a five degrees-of-freedom (dof) robot head and an off-the-shelf six dof robot manipulator, both mounted on a rotating base: i.e. the torso. From the sensory point of view, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels. It is worth mentioning, for example, the space-variant eyes, exploiting foveal and peripheral vision in a single arrangement, and the inertial sensor providing efficient image stabilization (vestibulo-ocular reflex).

  17. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

    This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  18. Autonomous robotic platforms for locating radio sources buried under rubble

    NASA Astrophysics Data System (ADS)

    Tasu, A. S.; Anchidin, L.; Tamas, R.; Paun, M.; Danisor, A.; Petrescu, T.

    2016-12-01

    This paper deals with the use of autonomous robotic platforms able to locate radio signal sources, such as mobile phones, buried under buildings that have collapsed as a result of earthquakes, natural disasters, terrorism, or war. The technique relies on averaging position data from a propagation model implemented on the platform with data acquired by the robotic platforms at the disaster site, which allows the approximate positions of radio sources buried under the rubble to be calculated. Based on the measurements, a radio map of the disaster site is produced; it is very useful for locating victims and for guiding rubble-lifting machinery, on the assumption that there is a victim next to any mobile device detected by the robotic platform. Because the approximate position is known, the lifting machinery does not risk further hurting the victims. Moreover, knowing the positions of the victims decreases the reaction time and clearly increases the chances of survival for victims buried under the rubble.
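
    As a loose illustration of combining a propagation model with measurements taken at several robot positions, the sketch below converts received signal strength to range with a log-distance path-loss model and averages the measurement points with range-dependent weights; the model parameters and the weighted-centroid scheme are assumptions for illustration, not the system described in the paper.

    ```python
    import numpy as np

    def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5, d0=1.0):
        """Log-distance path-loss model; p0_dbm, n and d0 are assumed parameters."""
        return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

    def estimate_source(positions, rssi):
        """Average the measurement points, weighting each inversely by its
        estimated range to the buried source (weighted-centroid localization)."""
        ranges = np.array([rssi_to_distance(r) for r in rssi])
        w = 1.0 / ranges
        return (positions * w[:, None]).sum(axis=0) / w.sum()

    pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])   # robot stops (m)
    rssi = np.array([-62.0, -70.0, -70.0, -75.0])   # strongest reading near (0, 0)
    print(estimate_source(pts, rssi))               # estimate pulled toward (0, 0)
    ```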

  19. Meeting the challenges--the role of medical informatics in an ageing society.

    PubMed

    Koch, Sabine

    2006-01-01

    The objective of this paper is to identify trends and new technological developments that appear due to an ageing society and to relate them to current research in the field of medical informatics. A survey of the current literature reveals that recent technological advances have been made in the fields of "telecare and home-monitoring", "smart homes and robotics" and "health information systems and knowledge management". Innovative technologies such as wearable devices, bio- and environmental sensors and mobile, humanoid robots do already exist and ambient assistant living environments are being created for an ageing society. However, those technologies have to be adapted to older people's self-care processes and coping strategies, and to support new ways of healthcare delivery. Medical informatics can support this process by providing the necessary information infrastructure, contribute to standardisation, interoperability and security issues and provide modelling and simulation techniques for educational purposes. Research fields of increasing importance with regard to an ageing society are, moreover, the fields of knowledge management, ubiquitous computing and human-computer interaction.

  20. Dynamics and control of cable-suspended parallel robots for giant telescopes

    NASA Astrophysics Data System (ADS)

    Zhuang, Peng; Yao, Zhengqiu

    2006-06-01

    A cable-suspended parallel robot uses the basic idea of the Stewart platform but replaces the parallel links with cables and the linear actuators with winches. It has many advantages over a conventional crane. The concept of applying a cable-suspended parallel robot to the construction and maintenance of a giant telescope is presented in this paper. Compared with the mass and travel of the robot's moving platform, the mass and deformation of the cables can be disregarded. Based on these premises, the kinematic and dynamic models of the robot are built. Through simulation, the inertia and gravity of the moving platform are found to have the dominant effect on the dynamic characteristics of the robot, while the dynamics of the actuators can be disregarded, so a simplified dynamic model applicable to real-time control is obtained. Moreover, based on the control-law partitioning approach and optimization theory, a workspace model-based controller is proposed that accounts for the fact that the cables can only pull, not push. Simulation results indicate that the controller achieves good accuracy in pose and speed tracking and keeps the cables in reliable tension by maintaining the minimum strain above a given value, thus ensuring smooth motion and accurate localization of the moving platform.
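
    The pull-only constraint can be illustrated by distributing a desired platform wrench over cable tensions subject to a lower bound, for example with bounded least squares; the toy planar geometry and tension limits below are assumptions, not the controller proposed in the paper.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def distribute_tensions(U, wrench, t_min, t_max):
        """Find cable tensions t with U.T @ t = wrench and t_min <= t <= t_max,
        so that every cable keeps pulling (a simplified stand-in for the
        paper's optimization-based controller)."""
        return lsq_linear(U.T, wrench, bounds=(t_min, t_max)).x

    # Toy fully constrained planar case: a point platform held by three cables
    # whose unit directions are 120 degrees apart; it must be supported against
    # a 5 kg weight while every tension stays above a 5 N floor.
    angles = np.deg2rad([90, 210, 330])
    U = np.stack([np.cos(angles), np.sin(angles)], axis=1)    # 3 x 2 unit directions
    tensions = distribute_tensions(U, wrench=np.array([0.0, 9.81 * 5.0]),
                                   t_min=5.0, t_max=500.0)
    print(tensions)   # all three tensions respect the 5 N lower bound
    ```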

  1. Urban search mobile platform modeling in hindered access conditions

    NASA Astrophysics Data System (ADS)

    Barankova, I. I.; Mikhailova, U. V.; Kalugina, O. B.; Barankov, V. V.

    2018-05-01

    The article explores control system simulation and the design of an experimental model of a rescue robot's mobile platform. The functional interface, a structural-functional diagram of the mobile platform control unit, and a functional control scheme for the mobile platform of the rescue robot were modeled. The task of designing a mobile platform for urban search in hindered access conditions is addressed through the use of a mechanical base with a chassis and crawler drive, a warning device, human heat sensors and a microcontroller based on the Arduino platform.

  2. BILL-E: Robotic Platform for Locomotion and Manipulation of Lightweight Space Structures

    NASA Technical Reports Server (NTRS)

    Jenett, Benjamin; Cheung, Kenneth

    2017-01-01

    We describe a robotic platform for traversing and manipulating a modular 3D lattice structure. The robot is designed to operate within a specifically structured environment, which enables low numbers of degrees of freedom (DOF) compared to robots performing comparable tasks in an unstructured environment. This allows for simple controls, as well as low mass and cost. This approach, designing the robot relative to the local environment in which it operates, results in a type of robot we call a "relative robot." We describe a bipedal robot that can locomote across a periodic lattice structure, as well as being able to handle, manipulate, and transport building block parts that compose the lattice structure. Based on a general inchworm design, the robot has added functionality for traveling over and operating on a host structure.

  3. Robotic follow system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Anderson, Matthew O [Idaho Falls, ID

    2007-05-01

    Robot platforms, methods, and computer media are disclosed. The robot platform includes perceptors, locomotors, and a system controller, which executes instructions for a robot to follow a target in its environment. The method includes receiving a target bearing and sensing whether the robot is blocked in front. If the robot is blocked in front, then the robot's motion is adjusted to avoid the nearest obstacle in front. If the robot is not blocked in front, then the method senses whether the robot is blocked toward the target bearing and, if so, sets the rotational direction opposite from the target bearing and adjusts the rotational velocity and translational velocity. If the robot is not blocked toward the target bearing, then the rotational velocity is adjusted proportional to the angle of the target bearing and the translational velocity is adjusted proportional to the distance to the nearest obstacle in front.
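
    The decision sequence described above can be summarized as a single control step; the sketch below is a plain reading of that logic with hypothetical gains and is not the patented implementation.

    ```python
    def follow_step(target_bearing, blocked_front, blocked_toward_target,
                    nearest_front_dist, k_rot=0.8, k_trans=0.5):
        """One control step of a follow behavior in the spirit of the method above
        (a sketch with hypothetical gains). Returns (rotational, translational)
        velocity commands."""
        if blocked_front:
            # Turn away from the obstructed heading and creep forward slowly.
            return (-k_rot if target_bearing >= 0 else k_rot), 0.05
        if blocked_toward_target:
            # Rotate opposite the target bearing while keeping some forward motion.
            rot = -k_rot if target_bearing >= 0 else k_rot
            return rot, k_trans * nearest_front_dist
        # Clear path: steer proportional to the bearing, speed proportional to clearance.
        return k_rot * target_bearing, k_trans * nearest_front_dist

    print(follow_step(target_bearing=0.3, blocked_front=False,
                      blocked_toward_target=False, nearest_front_dist=1.2))
    ```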

  4. Introduction to autonomous mobile robotics using Lego Mindstorms NXT

    NASA Astrophysics Data System (ADS)

    Akın, H. Levent; Meriçli, Çetin; Meriçli, Tekin

    2013-12-01

    Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the Lego Mindstorms NXT kits are used as the robot platform. The aims, scope and contents of the course are presented, and the design of the laboratory sessions as well as the term projects, which address several core problems of robotics and artificial intelligence simultaneously, are explained in detail.

  5. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  6. The ITALK project: a developmental robotics approach to the study of individual, social, and linguistic learning.

    PubMed

    Broz, Frank; Nehaniv, Chrystopher L; Belpaeme, Tony; Bisio, Ambra; Dautenhahn, Kerstin; Fadiga, Luciano; Ferrauto, Tomassino; Fischer, Kerstin; Förster, Frank; Gigliotta, Onofrio; Griffiths, Sascha; Lehmann, Hagen; Lohan, Katrin S; Lyon, Caroline; Marocco, Davide; Massera, Gianluca; Metta, Giorgio; Mohan, Vishwanathan; Morse, Anthony; Nolfi, Stefano; Nori, Francesco; Peniak, Martin; Pitsch, Karola; Rohlfing, Katharina J; Sagerer, Gerhard; Sato, Yo; Saunders, Joe; Schillingmann, Lars; Sciutti, Alessandra; Tikhanoff, Vadim; Wrede, Britta; Zeschel, Arne; Cangelosi, Angelo

    2014-07-01

    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots. Copyright © 2014 Cognitive Science Society, Inc.

  7. Astrobee: A New Platform for Free-Flying Robotics on the International Space Station

    NASA Technical Reports Server (NTRS)

    Smith, Trey; Barlow, Jonathan; Bualat, Maria; Fong, Terrence; Provencher, Christopher; Sanchez, Hugo; Smith, Ernest

    2016-01-01

    The Astrobees are next-generation free-flying robots that will operate in the interior of the International Space Station (ISS). Their primary purpose is to provide a flexible platform for research on zero-g freeflying robotics, with the ability to carry a wide variety of future research payloads and guest science software. They will also serve utility functions: as free-flying cameras to record video of astronaut activities, and as mobile sensor platforms to conduct surveys of the ISS. The Astrobee system includes two robots, a docking station, and a ground data system (GDS). It is developed by the Human Exploration Telerobotics 2 (HET-2) Project, which began in Oct. 2014, and will deliver the Astrobees for launch to ISS in 2017. This paper covers selected aspects of the Astrobee design, focusing on capabilities relevant to potential users of the platform.

  8. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    PubMed Central

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954
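
    A toy software model of the described arrangement, with a relay node polling several sensor platform nodes over a shared bus, is sketched below; the addressing, frame fields and value ranges are invented for illustration and do not reflect the LSI's actual protocol.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class SensorPlatformNode:
        """Toy stand-in for one sensor platform LSI on the shared bus; the frame
        fields and value ranges are invented for illustration."""
        address: int

        def read(self):
            return {"addr": self.address,
                    "temperature_c": round(random.uniform(24.0, 32.0), 2),
                    "capacitance_counts": random.randint(0, 4095),   # C-to-D output
                    "resistance_counts": random.randint(0, 4095)}    # R-to-D output

    def relay_node_poll(nodes):
        """Relay node sweeps every platform address on the bus and collects frames."""
        return [node.read() for node in nodes]

    bus = [SensorPlatformNode(addr) for addr in range(15)]   # 15 LSIs, as demonstrated
    for frame in relay_node_poll(bus)[:3]:
        print(frame)
    ```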

  9. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications.

    PubMed

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Nonomura, Yutaka; Muroyama, Masanori

    2017-08-28

    Robot tactile sensation can enhance human-robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as "sensor platform LSI") as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated.

  10. A modular wireless in vivo surgical robot with multiple surgical applications.

    PubMed

    Hawks, Jeff A; Rentschler, Mark E; Farritor, Shane; Oleynikov, Dmitry; Platt, Stephen R

    2009-01-01

    The use of miniature in vivo robots that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Previous work demonstrates that both mobile and fixed-based robots can successfully operate inside the abdominal cavity. A modular wireless mobile platform has also been developed to provide surgical vision and task assistance. This paper presents an overview of recent test results of several possible surgical applications that can be accommodated by this modular platform. Applications such as a biopsy grasper, stapler and clamp, video camera, and physiological sensors have been integrated into the wireless platform and tested in vivo in a porcine model. The modular platform facilitates rapid development and conversion from one type of surgical task assistance to another. These self-contained surgical devices are much more transportable and much lower in cost than current robotic surgical assistants. These devices could ultimately be carried and deployed by non-medical personnel at the site of an injury. A remotely located surgeon could use these robots to provide critical first response medical intervention.

  11. Muscle Motion Solenoid Actuator

    NASA Astrophysics Data System (ADS)

    Obata, Shuji

    It is one of our dreams to mechanically restore lost body parts for injured humans. Realistic humanoid robots composed of such machines require muscle-motion actuators controlled entirely by pulling actions. In particular, antagonistic pairs of bi-articular muscles are very important in animal motion. A system of actuators is proposed that uses the electromagnetic force of solenoids, with a stroke length of over 10 cm and a force of about 20 N, which are needed to move a real human arm. The devised actuators build on recent developments in modern electromagnetic materials; older materials could not offer such capabilities. The composite actuators are controlled by a high-performance computer and software that produces genuine motions.

  12. Robotic kidney autotransplantation in a porcine model: a procedure-specific training platform for the simulation of robotic intracorporeal vascular anastomosis.

    PubMed

    Tiong, Ho Yee; Goh, Benjamin Yen Seow; Chiong, Edmund; Tan, Lincoln Guan Lim; Vathsala, Anatharaman

    2018-03-31

    Robotic-assisted kidney transplantation (RKT) with the Da Vinci (Intuitive, USA) platform has been recently developed to improve outcomes by decreasing surgical site complications and morbidity, especially in obese patients. This potential paradigm shift in the surgical technique of kidney transplantation is performed in only a few centers. For wider adoption of this high stake complex operation, we aimed to develop a procedure-specific simulation platform in a porcine model for the training of robotic intracorporeal vascular anastomosis and evaluating vascular anastomoses patency. This paper describes the requirements and steps developed for the above training purpose. Over a series of four animal ethics' approved experiments, the technique of robotic-assisted laparoscopic autotransplantation of the kidney was developed in Amsterdam live pigs (60-70 kg). The surgery was based around the vascular anastomosis technique described by Menon et al. This non-survival porcine training model is targeted at transplant surgeons with robotic surgery experience. Under general anesthesia, each pig was placed in lateral decubitus position with the placement of one robotic camera port, two robotic 8 mm ports and one assistant port. Robotic docking over the pig posteriorly was performed. The training platform involved the following procedural steps. First, ipsilateral iliac vessel dissection was performed. Second, robotic-assisted laparoscopic donor nephrectomy was performed with in situ perfusion of the kidney with cold Hartmann's solution prior to complete division of the hilar vessels, ureter and kidney mobilization. Thirdly, the kidney was either kept in situ for orthotopic autotransplantation or mobilized to the pelvis and orientated for the vascular anastomosis, which was performed end to end or end to side after vessel loop clamping of the iliac vessels, respectively, using 6/0 Gore-Tex sutures. Following autotransplantation and release of vessel loops, perfusion of the graft was assessed using intraoperative indocyanine green imaging and monitoring urine output after unclamping. This training platform demonstrates adequate face and content validity. With practice, arterial anastomotic time could be improved, showing its construct validity. This porcine training model can be useful in providing training for robotic intracorporeal vascular anastomosis and may facilitate confident translation into a transplant human recipient.

  13. Conceptual design and kinematic analysis of a novel parallel robot for high-speed pick-and-place operations

    NASA Astrophysics Data System (ADS)

    Meng, Qizhi; Xie, Fugui; Liu, Xin-Jun

    2018-06-01

    This paper deals with the conceptual design, kinematic analysis and workspace identification of a novel four degrees-of-freedom (DOFs) high-speed spatial parallel robot for pick-and-place operations. The proposed spatial parallel robot consists of a base, four arms and a 1½ mobile platform. The mobile platform is a major innovation that avoids output singularity and offers the advantages of both single and double platforms. To investigate the characteristics of the robot's DOFs, a line graph method based on Grassmann line geometry is adopted in mobility analysis. In addition, the inverse kinematics is derived, and the constraint conditions to identify the correct solution are also provided. On the basis of the proposed concept, the workspace of the robot is identified using a set of presupposed parameters by taking input and output transmission index as the performance evaluation criteria.

  14. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
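
    A hierarchical fuzzy-rule controller of the kind mentioned can be illustrated at its lowest layer by a two-rule mapping from an IR range reading to a turn command; the membership functions and rule outputs below are invented for illustration and are not the authors' rule base.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_turn_rate(ir_distance_cm):
        """Two-rule fuzzy mapping from an IR range reading to a turn rate, with
        weighted-average defuzzification (membership functions are invented)."""
        near = tri(ir_distance_cm, 0, 10, 30)    # rule 1: near -> turn hard (1.0 rad/s)
        far = tri(ir_distance_cm, 20, 60, 100)   # rule 2: far  -> go straight (0.0 rad/s)
        weights, outputs = [near, far], [1.0, 0.0]
        total = sum(weights)
        return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

    print(fuzzy_turn_rate(25.0))   # partially "near" -> moderate turn (~0.67 rad/s)
    ```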

  15. The evolution of robotic general surgery.

    PubMed

    Wilson, E B

    2009-01-01

    Surgical robotics in general surgery has a relatively short but very interesting evolution. Just as minimally invasive and laparoscopic techniques have radically changed general surgery and fractionated it into subspecialization, robotic technology is likely to repeat the process of fractionation even further. Though it appears that robotics is growing more quickly in other specialties, the changes digital platforms are causing in the general surgical arena are likely to permanently alter general surgery. This review examines the evolution of robotics in minimally invasive general surgery looking forward to a time where robotics platforms will be fundamental to elective general surgery. Learning curves and adoption techniques are explored. Foregut, hepatobiliary, endocrine, colorectal, and bariatric surgery will be examined as growth areas for robotics, as well as revealing the current uses of this technology.

  16. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The inputs (commands) and outputs (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
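
    Once flow vectors have been back-projected to the ground plane, planar rigid motion (vx, vy, ω) can be fitted by linear least squares, and the instantaneous center of rotation follows as the point of zero velocity; the synthetic example below is a toy check of that step, not the authors' pipeline.

    ```python
    import numpy as np

    def fit_planar_motion(points, flows):
        """Least-squares fit of (vx, vy, omega) from ground-plane points and their
        flow vectors, assuming planar rigid motion u = vx - w*y, v = vy + w*x;
        the instantaneous center of rotation is the point of zero velocity."""
        x, y = points[:, 0], points[:, 1]
        A = np.zeros((2 * len(points), 3))
        A[0::2, 0] = 1.0; A[0::2, 2] = -y
        A[1::2, 1] = 1.0; A[1::2, 2] = x
        vx, vy, w = np.linalg.lstsq(A, flows.reshape(-1), rcond=None)[0]
        return (vx, vy, w), np.array([-vy / w, vx / w])

    # Synthetic check: robot turning about an ICR at (0, 2) m with omega = 0.5 rad/s.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-1, 1, size=(50, 2))
    icr_true, w_true = np.array([0.0, 2.0]), 0.5
    flow = np.stack([-w_true * (pts[:, 1] - icr_true[1]),
                      w_true * (pts[:, 0] - icr_true[0])], axis=1)
    print(fit_planar_motion(pts, flow)[1])   # recovers approximately [0, 2]
    ```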

  17. Towards Optimal Platform-Based Robot Design for Ankle Rehabilitation: The State of the Art and Future Prospects.

    PubMed

    Miao, Qing; Zhang, Mingming; Wang, Congzhe; Li, Hongsheng

    2018-01-01

    This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers in two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus by using keywords "ankle ∗ ," and "robot ∗ ," and ("rehabilitat ∗ " or "treat ∗ "). The search is limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have aligned rotation center as the ankle joint, appropriate workspace, and actuation torque, no matter how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors including posture adjustability and sensing functions should also be considered to promote related clinical applications. An ankle rehabilitation robot with reconfigurability to maximize its functions will be a new research point towards optimal design, especially on parallel mechanisms.

  18. Automatic Modeling and Simulation of Modular Robots

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Wei, H.; Zhang, Y.

    2018-03-01

    The ability to reconfigure makes modular robots adaptable, low-cost, self-healing and fault-tolerant, and allows them to be applied in a variety of mission situations. In this manuscript, a robot platform relying on a module library was designed, based on screw theory and module theory. Then, a configuration design method for the modular robot was proposed, and different configurations of the modular robot system were built, including industrial mechanical arms, a mobile platform, a six-legged robot and a 3D exoskeleton manipulator. Finally, one of these systems was simulated and verified using screw kinematics analysis and polynomial planning. The experimental results demonstrate the feasibility and superiority of this modular system.
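
    Screw-theoretic modeling of a chain of modules is commonly written as a product of exponentials of the joint screws; the sketch below implements that formulation for a hypothetical two-module arm (the screw axes and home pose are made-up parameters, not from the paper).

    ```python
    import numpy as np

    def hat(w):
        """3x3 skew-symmetric matrix of a 3-vector."""
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0]])

    def exp_twist(S, theta):
        """Matrix exponential of a unit screw axis S = (w, v) scaled by theta."""
        w, v = S[:3], S[3:]
        W = hat(w)
        R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W   # Rodrigues
        G = np.eye(3) * theta + (1 - np.cos(theta)) * W + (theta - np.sin(theta)) * W @ W
        T = np.eye(4); T[:3, :3] = R; T[:3, 3] = G @ v
        return T

    def forward_kinematics(screws, thetas, M):
        """Product-of-exponentials forward kinematics for a serial module chain."""
        T = np.eye(4)
        for S, th in zip(screws, thetas):
            T = T @ exp_twist(np.asarray(S, dtype=float), th)
        return T @ M

    # Hypothetical two-module planar arm, 1 m links, both joints about z.
    S1 = [0, 0, 1, 0, 0, 0]
    S2 = [0, 0, 1, 0, -1, 0]
    M = np.eye(4); M[0, 3] = 2.0                  # tool frame at the home configuration
    print(forward_kinematics([S1, S2], [np.pi / 2, 0.0], M)[:3, 3])   # ~ [0, 2, 0]
    ```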

  19. Robotic-assisted Heller myotomy: a modern technique and review of outcomes.

    PubMed

    Afaneh, Cheguevara; Finnerty, Brendan; Abelson, Jonathan S; Zarnegar, Rasa

    2015-06-01

    Achalasia is a debilitating esophageal motility disorder characterized by incomplete relaxation of the lower esophageal sphincter and lack of peristalsis. Manometry is the gold standard for diagnosis and laparoscopic Heller myotomy has been the gold standard for definitive therapy. However, current advances in surgical technology have introduced the robotic platform as a viable approach for this procedure. The safety and efficacy has been clearly established with comparable operative times to laparoscopy in experienced hands. Importantly, the rate of resolution of dysphagia postoperatively is over 80% which is comparable to laparoscopic outcomes. Moreover, some literature suggests lower esophageal perforation rates utilizing the robotic platform. Nevertheless, costs remain one of the largest barriers to widespread use of the robotic platform and future studies should aim to identify strategies in cost reduction.

  20. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package, "Argon", is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  1. An Innovative 6-DOF Platform for Testing a Space Robotic System to Perform Contact Tasks in Zero-Gravity Environment

    DTIC Science & Technology

    2013-10-21

    [Only report-form metadata was recoverable for this record: contract number FA9453-11-1-0306; subject terms: microgravity, zero gravity, test platform, simulation, gravity offloading; table-of-contents entry 3.3, "Principle of Gravity Offloading".]

  2. Automation Improvements for Synchrotron Based Small Angle Scattering Using an Inexpensive Robotics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, John P.

    This paper reports on progress toward creating semi-autonomous motion control platforms for beamline applications using the iRobot Create® platform. The goal is to create beamline research instrumentation whose motion paths are based on the local environment rather than positions commanded from a control system, that has low integration costs, and that is scalable and easily maintainable.

  3. Emotional Expression in Simple Line Drawings of a Robot's Face Leads to Higher Offers in the Ultimatum Game.

    PubMed

    Terada, Kazunori; Takeuchi, Chikara

    2017-01-01

    In the present study, we investigated whether expressing emotional states using a simple line drawing to represent a robot's face can serve to elicit altruistic behavior from humans. An experimental investigation was conducted in which human participants interacted with a humanoid robot whose facial expression was shown on an LCD monitor that was mounted as its head (Study 1). Participants were asked to play the ultimatum game, which is usually used to measure human altruistic behavior. All participants were assigned to be the proposer and were instructed to decide their offer within 1 min by controlling a slider bar. The corners of the robot's mouth, as indicated by the line drawing, simply moved upward, or downward depending on the position of the slider bar. The results suggest that the change in the facial expression depicted by a simple line drawing of a face significantly affected the participant's final offer in the ultimatum game. The offers were increased by 13% when subjects were shown contingent changes of facial expression. The results were compared with an experiment in a teleoperation setting in which participants interacted with another person through a computer display showing the same line drawings used in Study 1 (Study 2). The results showed that offers were 15% higher if participants were shown a contingent facial expression change. Together, Studies 1 and 2 indicate that emotional expression in simple line drawings of a robot's face elicits the same higher offer from humans as a human telepresence does.

  4. Emotional Expression in Simple Line Drawings of a Robot's Face Leads to Higher Offers in the Ultimatum Game

    PubMed Central

    Terada, Kazunori; Takeuchi, Chikara

    2017-01-01

    In the present study, we investigated whether expressing emotional states using a simple line drawing to represent a robot's face can serve to elicit altruistic behavior from humans. An experimental investigation was conducted in which human participants interacted with a humanoid robot whose facial expression was shown on an LCD monitor that was mounted as its head (Study 1). Participants were asked to play the ultimatum game, which is usually used to measure human altruistic behavior. All participants were assigned to be the proposer and were instructed to decide their offer within 1 min by controlling a slider bar. The corners of the robot's mouth, as indicated by the line drawing, simply moved upward, or downward depending on the position of the slider bar. The results suggest that the change in the facial expression depicted by a simple line drawing of a face significantly affected the participant's final offer in the ultimatum game. The offers were increased by 13% when subjects were shown contingent changes of facial expression. The results were compared with an experiment in a teleoperation setting in which participants interacted with another person through a computer display showing the same line drawings used in Study 1 (Study 2). The results showed that offers were 15% higher if participants were shown a contingent facial expression change. Together, Studies 1 and 2 indicate that emotional expression in simple line drawings of a robot's face elicits the same higher offer from humans as a human telepresence does. PMID:28588520

  5. ARTIE: An Integrated Environment for the Development of Affective Robot Tutors

    PubMed Central

    Imbernón Cuadrado, Luis-Eduardo; Manjarrés Riesco, Ángeles; De La Paz López, Félix

    2016-01-01

    Over the last decade robotics has attracted a great deal of interest from teachers and researchers as a valuable educational tool from preschool to high school levels. The implementation of social-support behaviors in robot tutors, in particular in the emotional dimension, can make a significant contribution to learning efficiency. With the aim of contributing to the rising field of affective robot tutors we have developed ARTIE (Affective Robot Tutor Integrated Environment). We offer an architectural pattern which integrates any given educational software for primary school children with a component whose function is to identify the emotional state of the students who are interacting with the software, and with the driver of a robot tutor which provides personalized emotional pedagogical support to the students. In order to support the development of affective robot tutors according to the proposed architecture, we also provide a methodology which incorporates a technique for eliciting pedagogical knowledge from teachers, and a generic development platform. This platform contains a component for identifying emotional states by analysing keyboard and mouse interaction data, and a generic affective pedagogical support component which specifies the affective educational interventions (including facial expressions, body language, tone of voice,…) in terms of BML (a Behavior Model Language for virtual agent specification) files which are translated into actions of a robot tutor. The platform and the methodology are both adapted to primary school students. Finally, we illustrate the use of this platform to build a prototype implementation of the architecture, in which the educational software is instantiated with Scratch and the robot tutor with NAO. We also report on a user experiment we carried out to orient the development of the platform and of the prototype. We conclude from our work that, in the case of primary school students, it is possible to identify, without using intrusive and expensive identification methods, the emotions which most affect the character of educational interventions. Our work also demonstrates the feasibility of a general-purpose architecture of decoupled components, in which a wide range of educational software and robot tutors can be integrated and then used according to different educational criteria. PMID:27536230
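
    The decoupled pipeline described above (interaction data, emotion identification, affective intervention handed to a robot driver) can be caricatured in a few lines; every feature, rule and intervention field below is a hypothetical placeholder rather than ARTIE's actual components or BML output.

    ```python
    from dataclasses import dataclass

    @dataclass
    class InteractionFeatures:
        typing_rate: float     # keystrokes per second
        error_rate: float      # fraction of corrected keystrokes
        idle_seconds: float    # time since the last keyboard/mouse event

    def estimate_emotion(f: InteractionFeatures) -> str:
        """Hypothetical rules standing in for the emotion-identification component."""
        if f.idle_seconds > 30 or f.error_rate > 0.4:
            return "frustrated"
        if f.typing_rate > 3.0 and f.error_rate < 0.1:
            return "engaged"
        return "neutral"

    # Stand-in for the affective interventions that ARTIE encodes as BML files.
    INTERVENTIONS = {
        "frustrated": {"face": "concerned", "voice": "soft", "speech": "Shall we try a hint?"},
        "engaged":    {"face": "smile",     "voice": "upbeat", "speech": "Great progress!"},
        "neutral":    {"face": "neutral",   "voice": "calm",   "speech": ""},
    }

    print(INTERVENTIONS[estimate_emotion(InteractionFeatures(0.4, 0.5, 12.0))])
    ```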

  6. ARTIE: An Integrated Environment for the Development of Affective Robot Tutors.

    PubMed

    Imbernón Cuadrado, Luis-Eduardo; Manjarrés Riesco, Ángeles; De La Paz López, Félix

    2016-01-01

    Over the last decade robotics has attracted a great deal of interest from teachers and researchers as a valuable educational tool from preschool to high school levels. The implementation of social-support behaviors in robot tutors, in particular in the emotional dimension, can make a significant contribution to learning efficiency. With the aim of contributing to the emerging field of affective robot tutors we have developed ARTIE (Affective Robot Tutor Integrated Environment). We offer an architectural pattern which integrates any given educational software for primary school children with a component whose function is to identify the emotional state of the students who are interacting with the software, and with the driver of a robot tutor which provides personalized emotional pedagogical support to the students. In order to support the development of affective robot tutors according to the proposed architecture, we also provide a methodology which incorporates a technique for eliciting pedagogical knowledge from teachers, and a generic development platform. This platform contains a component for identifying emotional states by analysing keyboard and mouse interaction data, and a generic affective pedagogical support component which specifies the affective educational interventions (including facial expressions, body language, tone of voice,…) in terms of BML (Behavior Markup Language, used for virtual agent specification) files which are translated into actions of a robot tutor. The platform and the methodology are both adapted to primary school students. Finally, we illustrate the use of this platform to build a prototype implementation of the architecture, in which the educational software is instantiated with Scratch and the robot tutor with NAO. We also report on a user experiment we carried out to orient the development of the platform and of the prototype. We conclude from our work that, in the case of primary school students, it is possible to identify, without using intrusive and expensive identification methods, the emotions which most affect the character of educational interventions. Our work also demonstrates the feasibility of a general-purpose architecture of decoupled components, in which a wide range of educational software and robot tutors can be integrated and then used according to different educational criteria.

  7. Micro-intestinal robot with wireless power transmission: design, analysis and experiment.

    PubMed

    Shi, Yu; Yan, Guozheng; Chen, Wenwen; Zhu, Bingquan

    2015-11-01

    Video capsule endoscopy is a useful tool for noninvasive intestinal detection, but it is not capable of active movement; wireless power is an effective solution to this problem. The research in this paper consists of two parts: the mechanical structure, which enables the robot to move smoothly inside the intestinal tract, and the wireless power supply, which ensures efficient energy delivery. First, an intestinal robot with leg architectures was developed based on the Archimedes spiral, which mimics the movement of an inchworm. The spiral legs were capable of unfolding to an angle of approximately 155°, which guaranteed clamping stability and consistent surface pressure while avoiding the risk of puncturing the intestinal tract. Second, the necessary power to operate the robot was far beyond the capacity of button batteries, so a wireless power transmission (WPT) platform was developed. The design of the platform focused on power transfer efficiency and frequency stability. In addition, the safety of human tissue in the alternating electromagnetic field was also taken into consideration. Finally, the assembled robot was tested and verified with the use of the WPT platform. In the isolated intestine, the robot system successfully traveled along the intestine with an average speed of 23 mm per minute. The obtained videos displayed a resolution of 320 × 240 and a transmission rate of 30 frames per second. The WPT platform supplied up to 500 mW of energy to the robot, and achieved a power transfer efficiency of 12%. It has been experimentally verified that the intestinal robot is safe and effective as an endoscopy tool, and that wireless power is feasible for it. Proposals for further improving the robot and wireless power supply are provided later in this paper. Copyright © 2015 Elsevier Ltd. All rights reserved.
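
    As a quick check of the reported power figures (assuming the 12% efficiency is defined as received power divided by transmitted power), the transmitter must supply roughly 4 W to deliver 500 mW to the robot:

```python
# Back-of-the-envelope check of the reported figures, assuming efficiency is
# defined as received power / transmitted power.
p_received_mw = 500.0      # power delivered to the robot, mW
efficiency = 0.12          # reported power transfer efficiency
p_transmitted_w = (p_received_mw / efficiency) / 1000.0
print(f"Transmitter must supply roughly {p_transmitted_w:.1f} W")   # ~4.2 W
```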

  8. Approach-Phase Precision Landing with Hazard Relative Navigation: Terrestrial Test Campaign Results of the Morpheus/ALHAT Project

    NASA Technical Reports Server (NTRS)

    Crain, Timothy P.; Bishop, Robert H.; Carson, John M., III; Trawny, Nikolas; Hanak, Chad; Sullivan, Jacob; Christian, John; DeMars, Kyle; Campbell, Tom; Getchius, Joel

    2016-01-01

    The Morpheus Project began in late 2009 as an ambitious effort code-named Project M to integrate three ongoing multi-center NASA technology developments: humanoid robotics, liquid oxygen/liquid methane (LOX/LCH4) propulsion and Autonomous Precision Landing and Hazard Avoidance Technology (ALHAT) into a single engineering demonstration mission to be flown to the Moon by 2013. The humanoid robot effort was redirected to a deployment of Robonaut 2 on the International Space Station in February of 2011 while Morpheus continued as a terrestrial field test project integrating the existing ALHAT Project's technologies into a sub-orbital flight system using the world's first LOX/LCH4 main propulsion and reaction control system fed from the same blowdown tanks. A series of 33 tethered tests with the Morpheus 1.0 vehicle and Morpheus 1.5 vehicle were conducted from April 2011 - December 2013 before successful, sustained free flights with the primary Vertical Testbed (VTB) navigation configuration began with Free Flight 3 on December 10, 2013. Over the course of the following 12 free flights and 3 tethered flights, components of the ALHAT navigation system were integrated into the Morpheus vehicle, operations, and flight control loop. The ALHAT navigation system was integrated and run concurrently with the VTB navigation system as a reference and fail-safe option in flight (see touchdown position estimate comparisons in Fig. 1). Flight testing completed with Free Flight 15 on December 15, 2014 with a completely autonomous Hazard Detection and Avoidance (HDA), integration of surface relative and Hazard Relative Navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the VTB GPS-based navigation solution at the safe landing site target. This paper describes the Morpheus joint VTB/ALHAT navigation architecture, the sensors utilized during the terrestrial flight campaign, issues resolved during testing, and the navigation results from the flight tests.
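
    The fusion of Hazard Relative Navigation fixes into an inertial state estimate can be illustrated with a generic linear Kalman measurement update. This is a textbook sketch under assumed state, measurement, and noise models; it is not the Morpheus dual-state filter.

```python
import numpy as np

# Generic linear Kalman measurement update, illustrating how a surface/hazard
# relative position fix could correct an inertially propagated state.
def kalman_update(x, P, z, H, R):
    """One measurement update: state x, covariance P, measurement z = H x + noise(R)."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: 6-state [position, velocity]; the HRN fix measures position only.
x = np.zeros(6)
P = np.diag([25.0, 25.0, 25.0, 1.0, 1.0, 1.0])   # assumed initial uncertainty
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # position-only measurement model
R = np.eye(3) * 0.5**2                           # assumed 0.5 m measurement noise
z = np.array([1.2, -0.4, 0.8])                   # position fix relative to the landing site

x, P = kalman_update(x, P, z, H, R)
print(x[:3])   # corrected position estimate
```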

  9. Analysis and design of a six-degree-of-freedom Stewart platform-based robotic wrist

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami; Zhou, Zhen-Lei

    1991-01-01

    The kinematic analysis and implementation of a six-degree-of-freedom robotic wrist which is mounted to a general open-kinematic-chain manipulator to serve as a testbed for studying precision robotic assembly in space is discussed. The wrist design is based on the Stewart Platform mechanism and consists mainly of two platforms and six linear actuators driven by DC motors. Position feedback is achieved by linear displacement transducers mounted along the actuators, and force feedback is obtained by a six-degree-of-freedom force sensor mounted between the gripper and the payload platform. The robot wrist inverse kinematics, which computes the required actuator lengths corresponding to Cartesian variables, has a closed-form solution. The forward kinematics is solved iteratively using the Newton-Raphson method, which simultaneously provides a modified Jacobian matrix relating actuator length velocities to Cartesian translational velocities and time rates of change of roll-pitch-yaw angles. Results of computer simulations conducted to evaluate the efficiency of the forward kinematics and modified Jacobian matrix are discussed.
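
    The two kinematic directions described above can be sketched as follows: a closed-form inverse kinematics that maps a platform pose to the six leg lengths, and a Newton-Raphson forward-kinematics iteration that inverts it numerically through a Jacobian. The attachment-point geometry and dimensions below are illustrative placeholders, not those of the NASA wrist.

```python
import numpy as np

# Stewart-platform kinematics sketch with made-up attachment geometry.
def rpy_to_matrix(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                     [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                     [-sp,   cp*sr,            cp*cr          ]])

base_ang = np.deg2rad([0, 60, 120, 180, 240, 300])
plat_ang = base_ang + np.deg2rad(30)          # staggered anchors avoid a singular neutral pose
BASE = np.array([np.cos(base_ang), np.sin(base_ang), np.zeros(6)])         # 3 x 6
PLAT = 0.6 * np.array([np.cos(plat_ang), np.sin(plat_ang), np.zeros(6)])   # 3 x 6

def inverse_kinematics(pose):
    """Closed-form actuator lengths for pose = [x, y, z, roll, pitch, yaw]."""
    p, R = pose[:3], rpy_to_matrix(*pose[3:])
    return np.linalg.norm(p[:, None] + R @ PLAT - BASE, axis=0)

def forward_kinematics(lengths, pose0, iters=50, tol=1e-9):
    """Newton-Raphson on the pose, using a finite-difference Jacobian d(lengths)/d(pose)."""
    pose = np.asarray(pose0, dtype=float).copy()
    for _ in range(iters):
        err = inverse_kinematics(pose) - lengths
        if np.linalg.norm(err) < tol:
            break
        J = np.zeros((6, 6))
        for j in range(6):
            d = np.zeros(6); d[j] = 1e-6
            J[:, j] = (inverse_kinematics(pose + d) - inverse_kinematics(pose - d)) / 2e-6
        pose -= np.linalg.solve(J, err)
    return pose

target = np.array([0.05, -0.02, 1.0, 0.05, -0.03, 0.10])
lengths = inverse_kinematics(target)
print(forward_kinematics(lengths, pose0=[0.0, 0.0, 0.9, 0.0, 0.0, 0.0]))  # recovers the target pose
```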

  10. A Robotic Platform to Study the Foreflipper of the California Sea Lion.

    PubMed

    Kulkarni, Aditya A; Patel, Rahi K; Friedman, Chen; Leftwich, Megan C

    2017-01-10

    The California sea lion (Zalophus californianus) is an agile and powerful swimmer. Unlike many successful swimmers (dolphins, tuna), sea lions generate most of their thrust with their large foreflippers. This protocol describes a robotic platform designed to study the hydrodynamic performance of the swimming California sea lion. The robot is a model of the animal's foreflipper that is actuated by motors to replicate the motion of its propulsive stroke (the 'clap'). The kinematics of the sea lion's propulsive stroke are extracted from video data of unmarked, non-research sea lions at the Smithsonian National Zoological Park (SNZ). Those data form the basis of the actuation motion of the robotic flipper presented here. The geometry of the robotic flipper is based on a high-resolution laser scan of a foreflipper of an adult female sea lion, scaled to about 60% of the full-scale flipper. The articulated model has three joints, mimicking the elbow, wrist, and knuckle joints of the sea lion foreflipper. The robotic platform matches the dynamic properties (Reynolds number and tip speed) of the animal when accelerating from rest. The robotic flipper can be used to determine the performance (forces and torques) and resulting flowfields.
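
    Matching the Reynolds number Re = U L / nu of a 60%-scale model in the same fluid simply requires scaling the characteristic speed up by the inverse of the length scale; the speed value below is illustrative, not a number from the paper.

```python
# Reynolds-number matching for a scaled model in the same fluid (same nu):
# Re = U * L / nu  =>  U_model = U_animal * (L_animal / L_model).
scale = 0.60
u_animal = 4.0                 # illustrative flipper tip speed in m/s (not from the paper)
u_model = u_animal / scale
print(f"Model tip speed for Re matching: {u_model:.2f} m/s")   # ~6.67 m/s
```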

  11. A haptic sensing upgrade for the current EOD robotic fleet

    NASA Astrophysics Data System (ADS)

    Rowe, Patrick

    2014-06-01

    The past decade and a half has seen a tremendous rise in the use of mobile manipulator robotic platforms for bomb inspection and disposal, explosive ordnance disposal, and other extremely hazardous tasks in both military and civilian settings. Skilled operators are able to control these robotic vehicles in amazing ways given the very limited situational awareness obtained from a few on-board camera views. Future generations of robotic platforms will, no doubt, provide some sort of additional force or haptic sensor feedback to further enhance the operator's interaction with the robot, especially when dealing with fragile, unstable, and explosive objects. Unfortunately, the robot operators need this capability today. This paper discusses an approach to provide existing (and future) robotic mobile manipulator platforms, with which trained operators are already familiar and highly proficient, this desired haptic and force feedback capability. The goals of this technology are to be rugged, reliable, and affordable. It should also be able to be applied to a wide range of existing robots with a wide variety of manipulator/gripper sizes and styles. Finally, the presentation of the haptic information to the operator is discussed, given the fact that control devices that physically interact with the operators are not widely available and still in the research stages.

  12. Towards multi-platform software architecture for Collaborative Teleoperation

    NASA Astrophysics Data System (ADS)

    Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik

    2009-03-01

    Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application; however, the two systems could not be used together, so simultaneous control of a distant robot was impossible. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.

  13. Robotic right colectomy using the Da Vinci Single-Site® platform: case report.

    PubMed

    Morelli, Luca; Guadagni, Simone; Caprili, Giovanni; Di Candio, Giulio; Boggi, Ugo; Mosca, Franco

    2013-09-01

    While single-port laparoscopy for abdominal surgery is technically challenging, the Da Vinci Single-Site® robotic surgery platform may help to overcome some of the difficulties of this rapidly evolving technique. The authors of this article present a case of single-incision, robotic right colectomy using this device. A 74-year-old female with malignant polyp of caecum was operated on with a single-site approach using the Da Vinci Single-Site® robotic surgery device. Resection and anastomosis were performed extra-corporeally after undocking the robot. The procedure was successfully completed in 200 min. No surgical complications occurred during the intervention and the post-operative stay and no conversion to laparotomy or additional trocars were required. To the best of our knowledge, this is the first case of right colectomy using the Da Vinci Single-Site® robotic surgery platform to be reported. The procedure is feasible and safe and its main advantages are restoration of triangulation and reduced instrument clashes. Copyright © 2013 John Wiley & Sons, Ltd.

  14. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, the eye movements and ERPs complement each other, so the interface can provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects completed the tasks effectively using the proposed interface, and the best completion times were relatively short, close to those achieved by manual operation.
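
    A threshold rule on the peak-to-peak amplitude and duration of the horizontal and vertical EOG channels is one simple way to separate such eye-movement events. The sketch below is a toy illustration with made-up thresholds; it is not the calibrated algorithm used in the study.

```python
import numpy as np

# Toy threshold-based detector for eye-movement events from two EOG channels.
# Thresholds and rules are invented for illustration only.
def classify_eog(h, v, fs=250, amp_thresh=120e-6):
    """h, v: 1-D arrays of one epoch of horizontal/vertical EOG in volts."""
    h_pp, v_pp = np.ptp(h), np.ptp(v)
    if v_pp > amp_thresh and h_pp < amp_thresh:
        # Duration separates a transient blink from a sustained frown (toy rule).
        above = np.abs(v - np.median(v)) > amp_thresh / 2
        return "frown" if above.sum() / fs > 0.5 else "blink"
    if h_pp > amp_thresh:
        return "gaze_right" if h[np.argmax(np.abs(h))] > np.median(h) else "gaze_left"
    return "none"

# Example epoch: a synthetic blink (brief vertical deflection, no horizontal activity).
t = np.linspace(0, 1, 250)
v = 300e-6 * np.exp(-((t - 0.5) / 0.05) ** 2)
h = np.zeros_like(t)
print(classify_eog(h, v))   # -> "blink"
```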

  15. Framework and Implications of Virtual Neurorobotics

    PubMed Central

    Goodman, Philip H.; Zou, Quan; Dascalu, Sergiu-Mihai

    2008-01-01

    Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications. PMID:18982115

  16. Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

    PubMed

    Morse, Anthony F; Cangelosi, Angelo

    2017-02-01

    Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills. Copyright © 2016 Cognitive Science Society, Inc.

  17. Robotic-assisted single-port donor nephrectomy using the da Vinci single-site platform.

    PubMed

    LaMattina, John C; Alvarez-Casas, Josue; Lu, Irene; Powell, Jessica M; Sultan, Samuel; Phelan, Michael W; Barth, Rolf N

    2018-02-01

    Although single-port donor nephrectomy offers improved cosmetic outcomes, technical challenges have limited its application to selected centers. Our center has performed over 400 single-port donor nephrectomies. The da Vinci single-site robotic platform was utilized in an effort to overcome the steric, visualization, ergonomic, and other technical limitations associated with the single-port approach. Food and Drug Administration device exemption was obtained. Selection criteria for kidney donation included body mass index <35, left kidney donors, and ≤2 renal arteries. After colonic mobilization using standard single-port techniques, the robotic approach was utilized for ureteral complex and hilar dissection. Three cases were performed using the robotic single-site platform. Average total operative time was 262 ± 42 min including 82 ± 16 min of robotic use. Docking time took 20 ± 10 min. Blood loss averaged 77 ± 64 mL. No intraoperative complications occurred, and all procedures were completed with our standard laparoscopic single-port approach. This is the first clinical experience of robotic-assisted donor nephrectomy utilizing the da Vinci single-site platform. Our experience supported the safety of this approach but found that the technology added cost and complexity without tangible benefit. Development of articulating instruments, energy, and stapling devices will be necessary for increased application of robotic single-site surgery for donor nephrectomy. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A neurorobotic platform for locomotor prosthetic development in rats and mice

    NASA Astrophysics Data System (ADS)

    von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire

    2016-04-01

    Objectives. We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. Approach. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Main Results. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. Significance. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
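
    Closed-loop trunk assistance of this kind can be pictured as a simple feedback law that converts measured trunk kinematics into a bounded supportive force in real time. The proportional-derivative sketch below uses invented gains, setpoint, and limits purely for illustration; it is not the platform's controller.

```python
# Minimal sketch of closed-loop vertical trunk assistance: a proportional-
# derivative law on trunk height drives the supportive force in real time.
# Gains, setpoint, timing, and limits are illustrative values only.
def assistance_force(height, height_prev, target=0.06, dt=0.01, kp=400.0, kd=20.0, f_max=5.0):
    """Return an upward support force (N) from the measured trunk height (m)."""
    error = target - height
    d_error = -(height - height_prev) / dt          # damp downward velocity
    force = kp * error + kd * d_error
    return max(0.0, min(f_max, force))              # assist only upward, saturate at f_max

print(assistance_force(height=0.045, height_prev=0.046))  # saturates at the 5 N limit here
```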

  19. A neurorobotic platform for locomotor prosthetic development in rats and mice.

    PubMed

    von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire

    2016-04-01

    We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.

  20. Kinematic evaluation of mobile robotic platforms for overground gait neurorehabilitation

    NASA Astrophysics Data System (ADS)

    Alias, N. Akmal; Huq, M. Saiful; Ibrahim, B. S. K. K.; Omar, Rosli

    2017-09-01

    Gait-assistive devices offer a valuable solution for walking re-education, supporting the anatomical joints throughout the rehabilitation session. Overground gait training, which differs significantly from body-weight-supported treadmill training in many respects, essentially consists of a mobile robotic base that supports the subject securely (usually with an overhead harness) while its motion and orientation are controlled seamlessly to allow the subject to move freely. In this study, both holonomic and nonholonomic drives were evaluated, and the outcome may constitute preliminary results toward an effective approach to designing the mobile base of a rehabilitation robot. Sets of kinematic equations are derived using typical geometries of the two drives. The results indicate that an omnidirectional mecanum-wheel platform can support more sophisticated motion profiles. Although the differential-drive platform is simpler and easier to construct, it is less desirable because of the limited set of motions it can execute. The omnidirectional robot with mecanum wheels, which is classified as holonomic, is potentially the best solution in terms of its capability to move in an arbitrary direction without first reorienting its wheels.
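
    The contrast between the two drive types can be summarized with their standard inverse-kinematic equations: a four-mecanum-wheel base maps any planar body twist (vx, vy, wz) to wheel speeds, whereas a differential drive admits no lateral velocity component. The wheel radius, track dimensions, and sign convention below are illustrative and may differ from the paper's derivation.

```python
import numpy as np

# Standard inverse kinematics for the two drive types, with assumed geometry
# (wheel radius r, half-distances lx/ly, 45-degree mecanum rollers).
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Body twist -> wheel angular speeds for a 4-wheel mecanum (holonomic) base."""
    k = lx + ly
    return (1.0 / r) * np.array([
        vx - vy - k * wz,   # front-left
        vx + vy + k * wz,   # front-right
        vx + vy - k * wz,   # rear-left
        vx - vy + k * wz,   # rear-right
    ])

def differential_wheel_speeds(vx, wz, r=0.05, track=0.4):
    """Differential (nonholonomic) drive: only forward speed vx and yaw rate wz are achievable."""
    return np.array([
        (vx - wz * track / 2) / r,   # left wheel
        (vx + wz * track / 2) / r,   # right wheel
    ])

# Pure sideways motion: feasible for the mecanum base, impossible for the differential base.
print(mecanum_wheel_speeds(vx=0.0, vy=0.3, wz=0.0))
print(differential_wheel_speeds(vx=0.0, wz=0.0))   # both wheels at rest; no lateral motion exists
```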

  1. Towards Optimal Platform-Based Robot Design for Ankle Rehabilitation: The State of the Art and Future Prospects

    PubMed Central

    Li, Hongsheng

    2018-01-01

    This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers from two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus by using the keywords "ankle∗," and "robot∗," and ("rehabilitat∗" or "treat∗"). The search is limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have its rotation center aligned with the ankle joint, an appropriate workspace, and adequate actuation torque, regardless of how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors, including posture adjustability and sensing functions, should also be considered to promote related clinical applications. An ankle rehabilitation robot that is reconfigurable, to maximize its functions, will be a new research direction toward optimal design, especially for parallel mechanisms. PMID:29736230

  2. New Developments in Robotics and Single-site Gynecologic Surgery.

    PubMed

    Matthews, Catherine A

    2017-06-01

    Within the last 10 years there have been significant advances in minimal-access surgery. Although no emerging technology has demonstrated improved outcomes or fewer complications than standard laparoscopy, the introduction of the robotic surgical platform has significantly lowered abdominal hysterectomy rates. While operative time and cost were higher in robotic-assisted procedures when the technology was first introduced, newer studies demonstrate equivalent or improved robotic surgical efficiency with increased experience. Single-port hysterectomy has not improved postoperative pain or subjective cosmetic results. Emerging platforms with flexible, articulating instruments may increase the uptake of single-port procedures including natural orifice transluminal endoscopic cases.

  3. Robotic vehicle with multiple tracked mobility platforms

    DOEpatents

    Salton, Jonathan R [Albuquerque, NM; Buttz, James H [Albuquerque, NM; Garretson, Justin [Albuquerque, NM; Hayward, David R [Wetmore, CO; Hobart, Clinton G [Albuquerque, NM; Deuel, Jr., Jamieson K.

    2012-07-24

    A robotic vehicle having two or more tracked mobility platforms that are mechanically linked together with a two-dimensional coupling, thereby forming a composite vehicle of increased mobility. The robotic vehicle is operative in hazardous environments and can be capable of semi-submersible operation. The robotic vehicle is capable of remote controlled operation via radio frequency and/or fiber optic communication link to a remote operator control unit. The tracks have a plurality of track-edge scallop cut-outs that allow the tracks to easily grab onto and roll across railroad tracks, especially when crossing the railroad tracks at an oblique angle.

  4. Curb Mounting, Vertical Mobility, and Inverted Mobility on Rough Surfaces Using Microspine-Enabled Robots

    NASA Technical Reports Server (NTRS)

    Parness, Aaron

    2012-01-01

    Three robots that extend microspine technology to enable advanced mobility are presented. First, the Durable Reconnaissance and Observation Platform (DROP) and the ReconRobotics Scout platform use a new rotary configuration of microspines to provide improved soldier-portable reconnaissance by moving rapidly over curbs and obstacles, transitioning from horizontal to vertical surfaces, climbing rough walls and surviving impacts. Next, the four-legged LEMUR robot uses new configurations of opposed microspines to anchor to both manmade and natural rough surfaces. Using these anchors as feet enables mobility in unstructured environments, from urban disaster areas to deserts and caves.

  5. Review of contemporary role of robotics in bariatric surgery

    PubMed Central

    Bindal, Vivek; Bhatia, Parveen; Dudeja, Usha; Kalhan, Sudhir; Khetan, Mukund; John, Suviraj; Wadhera, Sushant

    2015-01-01

    With the rise in the number of bariatric procedures, surgeons are facing more complex and technically demanding surgical situations. Robotic digital platforms potentially provide a solution to better address these challenges. This review examines the published literature on the outcomes and complications of bariatric surgery using a robotic platform. Use of robotics to perform adjustable gastric banding, sleeve gastrectomy, Roux-en-Y gastric bypass (RYGB), biliopancreatic diversion with duodenal switch and revisional bariatric procedures (RBP) is assessed. A search on PubMed was performed for the most relevant articles in robotic bariatric surgery. A total of 23 articles were selected and reviewed in this article. The review showed that the use of robotics led to similar or lower complication rates in bariatric surgery when compared with laparoscopy. Two studies found a significantly lower leak rate for robotic gastric bypass when compared to the laparoscopic method. The learning curve for RYGB seems to be shorter with the robotic technique. Three studies revealed a significantly shorter operative time, while four studies found a longer operative time for the robotic technique of gastric bypass. As for the outcomes of RBP, one study found a lower complication rate in the robotic arm versus the laparoscopic and open arms. Most authors stated that the use of robotics provides superior visualisation, more degrees of freedom and better ergonomics. The application of robotics in bariatric surgery seems to be a safe and feasible option. Use of robotics may provide specific advantages in some situations, and overcome limitations of laparoscopic surgery. Large and well-designed randomised clinical trials with long follow-up are needed to further define the role of digital platforms in bariatric surgery. PMID:25598594

  6. THE DECADE OF THE RABiT (2005–15)

    PubMed Central

    Garty, G.; Turner, H. C.; Salerno, A.; Bertucci, A.; Zhang, J.; Chen, Y.; Dutta, A.; Sharma, P.; Bian, D.; Taveras, M.; Wang, H.; Bhatla, A.; Balajee, A.; Bigelow, A. W.; Repin, M.; Lyulko, O. V.; Simaan, N.; Yao, Y. L.; Brenner, D. J.

    2016-01-01

    The RABiT (Rapid Automated Biodosimetry Tool) is a dedicated Robotic platform for the automation of cytogenetics-based biodosimetry assays. The RABiT was developed to fulfill the critical requirement for triage following a mass radiological or nuclear event. Starting from well-characterized and accepted assays we developed a custom robotic platform to automate them. We present here a brief historical overview of the RABiT program at Columbia University from its inception in 2005 until the RABiT was dismantled at the end of 2015. The main focus of this paper is to demonstrate how the biological assays drove development of the custom robotic systems and in turn new advances in commercial robotic platforms inspired small modifications in the assays to allow replacing customized robotics with ‘off the shelf’ systems. Currently, a second-generation, RABiT II, system at Columbia University, consisting of a PerkinElmer cell::explorer, was programmed to perform the RABiT assays and is undergoing testing and optimization studies. PMID:27412510

  7. A Raman spectroscopy bio-sensor for tissue discrimination in surgical robotics.

    PubMed

    Ashok, Praveen C; Giardini, Mario E; Dholakia, Kishan; Sibbett, Wilson

    2014-01-01

    We report the development of a fiber-based Raman sensor to be used in tumour margin identification during endoluminal robotic surgery. Although this is a generic platform, the sensor we describe was adapted for the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) robotic platform. On such a platform, the Raman sensor is intended to identify ambiguous tissue margins during robot-assisted surgeries. To maintain sterility of the probe during surgical intervention, a disposable sleeve was specially designed. A straightforward user-compatible interface was implemented where a supervised multivariate classification algorithm was used to classify different tissue types based on specific Raman fingerprints so that it could be used without prior knowledge of spectroscopic data analysis. The protocol avoids inter-patient variability in data and the sensor system is not restricted for use in the classification of a particular tissue type. Representative tissue classification assessments were performed using this system on excised tissue. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
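
    A generic example of supervised multivariate classification of Raman spectra is principal component analysis followed by linear discriminant analysis, trained on labeled fingerprints. The sketch below uses synthetic spectra and scikit-learn; it stands in for, and does not reproduce, the preprocessing and classifier of the ARAKNES sensor.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic Raman-like spectra for two tissue classes; the "tumour" class gets
# an extra band at a fixed wavenumber range to mimic a spectral fingerprint.
rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 40, 600
healthy = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers))
tumour = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers))
tumour[:, 250:260] += 2.0

X = np.vstack([healthy, tumour])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Dimensionality reduction (PCA) followed by a linear discriminant classifier.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```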

  8. How to prepare the patient for robotic surgery: before and during the operation.

    PubMed

    Lim, Peter C; Kang, Elizabeth

    2017-11-01

    Robotic surgery in the treatment of gynecologic diseases continues to evolve and has become accepted over the last decade. The advantages of robotic-assisted laparoscopic surgery over conventional laparoscopy are three-dimensional camera vision, superior precision and dexterity with EndoWristed instruments, elimination of operator tremor, and decreased surgeon fatigue. The drawbacks of the technology are bulkiness and lack of tactile feedback. As with other surgical platforms, the limitations of robotic surgery must be understood. Patient selection and the types of surgical procedures that can be performed through the robotic surgical platform are critical to the success of robotic surgery. First, patient selection and the indication for gynecologic disease should be considered. Discussion with the patient regarding the benefits and potential risks of robotic surgery, its complications, and alternative treatments is mandatory, followed by the patient's signature indicating informed consent. Appropriate preoperative evaluation, including laboratory and imaging tests, and bowel cleansing should be considered depending upon the type of robotic-assisted procedure. Unlike other surgical procedures, robotic surgery is equipment-intensive and requires an appropriate surgical suite to accommodate the patient side cart, the vision system, and the surgeon's console. Surgical personnel must be properly trained in the robotics technology. Several factors must be considered to perform a successful robotic-assisted surgery: the indication and type of surgical procedure, the surgical platform, patient position and the degree of Trendelenburg, proper port placement configuration, and appropriate instrumentation. These factors, which must be addressed so that patients can be appropriately prepared before and during the operation, are described here. Copyright © 2017. Published by Elsevier Ltd.

  9. Applications for the MATILDA robotic platform

    NASA Astrophysics Data System (ADS)

    Munkeby, Steve H.; Jones, Don; Bugg, George; Smith, Kathryn

    2002-07-01

    Most robotic platforms have, up to this point, been designed with emphasis placed on improving mobility technologies. Minimal emphasis has been placed on payloads and mission execution. Using a top-down approach, Mesa Associates, Inc. identified specific UGV mission applications and structured its MATILDA platform using these applications for vehicle mobility and motion control requirements. Specific applications identified for the MATILDA platform include: Target surveillance, explosive device neutralization, material pickup and transport, weapon transport and firing, and law enforcement. Current performance results, lessons-learned, technical hurdles, and future applications are examined.

  10. KSC-2010-4444

    NASA Image and Video Library

    2010-08-20

    CAPE CANAVERAL, Fla. -- Technicians in the Space Station Processing Facility at NASA's Kennedy Space Center in Florida prepare to load the dexterous humanoid astronaut helper, Robonaut 2, or R2, into the Permanent Multipurpose Module, or PMM. Packed inside a launch box called SLEEPR, or Structural Launch Enclosure to Effectively Protect Robonaut, R2 will be placed in the same launch orientation as space shuttle Discovery's STS-133 crew members -- facing toward the nose of the shuttle with the back taking all the weight. Although R2 will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the International Space Station. STS-133 is targeted to launch Nov. 1. Photo credit: NASA/Frankie Martin

  11. Development, fabrication, and modeling of highly sensitive conjugated polymer based piezoresistive sensors in electronic skin applications

    NASA Astrophysics Data System (ADS)

    Khalili, Nazanin; Naguib, Hani E.; Kwon, Roy H.

    2016-04-01

    Human intervention can be reduced by developing tools based on sensing devices, which have a wide range of applications including humanoid robots and remote, minimally invasive surgery. Similar to the five human senses, sensors interface with their surroundings to stimulate a suitable response or action. The sense of touch, which arises in human skin, is among the most challenging senses to emulate due to its ultra-high sensitivity. This has brought forth novel and challenging issues to consider in the field of biomimetic robotics. In this work, using a multiphase reaction, a polypyrrole (PPy) based hydrogel is developed as a resistive-type pressure sensor with an intrinsically elastic microstructure stemming from three-dimensional hollow spheres. Furthermore, a semi-analytical constriction resistance model is developed that accounts for the real contact area between the PPy hydrogel sensors and the electrode, along with the dependency of the contact resistance change on the applied load. The model is then solved using a Monte Carlo technique and the sensitivity of the sensor is obtained. The experimental results showed the good tracking ability of the proposed model.

  12. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    NASA Astrophysics Data System (ADS)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which spatial depth cues are obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects was proposed in this paper and a humanoid monocular stereo measuring system with two degrees of freedom was demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its vision characteristic resembles the resolution decay of the eye's vision from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions and employed a rotating platform with two rotational degrees of freedom to drive ZJU SY-I. The structure of the proposed system was described in detail. The depth of a single feature point on the spatial object was deduced, as well as its spatial coordinates. With the focal-length adjustment of ZJU SY-I and the rotation control of the rotating platform, the spatial coordinates of all feature points on the spatial object could be obtained and then the 3-D structure of the spatial object could be reconstructed. The 3-D structure measurement experiments of two spatial objects with different distances and sizes were conducted. The main factors affecting the measurement accuracy of the proposed system were analyzed and discussed.
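
    Once the depth of a feature point and the two platform angles are known, its Cartesian coordinates follow from a spherical-to-Cartesian conversion. The sketch below uses an idealized frame in which the two rotation axes intersect at the optical center; the paper's exact frame definitions and calibration terms are not reproduced.

```python
import numpy as np

# Idealized conversion of (depth, pan, tilt) into Cartesian coordinates of a
# feature point, assuming the rotation axes intersect at the optical center.
def feature_point_xyz(depth, pan, tilt):
    """depth in metres; pan (about the vertical axis) and tilt in radians."""
    x = depth * np.cos(tilt) * np.cos(pan)
    y = depth * np.cos(tilt) * np.sin(pan)
    z = depth * np.sin(tilt)
    return np.array([x, y, z])

print(feature_point_xyz(depth=1.5, pan=np.deg2rad(10), tilt=np.deg2rad(-5)))
```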

  13. Robotic Precursor Missions for Mars Habitats

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Pirjanian, Paolo; Schenker, Paul S.; Trebi-Ollennu, Ashitey; Das, Hari; Joshi, Sajay

    2000-01-01

    Infrastructure support for robotic colonies, manned Mars habitat, and/or robotic exploration of planetary surfaces will need to rely on the field deployment of multiple robust robots. This support includes such tasks as the deployment and servicing of power systems and ISRU generators, construction of beaconed roadways, and the site preparation and deployment of manned habitat modules. The current level of autonomy of planetary rovers such as Sojourner will need to be greatly enhanced for these types of operations. In addition, single robotic platforms will not be capable of complicated construction scenarios. Precursor robotic missions to Mars that involve teams of multiple cooperating robots to accomplish some of these tasks is a cost effective solution to the possible long timeline necessary for the deployment of a manned habitat. Ongoing work at JPL under the Mars Outpost Program in the area of robot colonies is investigating many of the technology developments necessary for such an ambitious undertaking. Some of the issues that are being addressed include behavior-based control systems for multiple cooperating robots (CAMPOUT), development of autonomous robotic systems for the rescue/repair of trapped or disabled robots, and the design and development of robotic platforms for construction tasks such as material transport and surface clearing.

  14. Raven-II: an open platform for surgical robotics research.

    PubMed

    Hannaford, Blake; Rosen, Jacob; Friedman, Diana W; King, Hawkeye; Roan, Phillip; Cheng, Lei; Glozman, Daniel; Ma, Ji; Kosari, Sina Nia; White, Lee

    2013-04-01

    The Raven-II is a platform for collaborative research on advances in surgical robotics. Seven universities have begun research using this platform. The Raven-II system has two 3-DOF spherical positioning mechanisms capable of attaching interchangeable four DOF instruments. The Raven-II software is based on open standards such as Linux and ROS to maximally facilitate software development. The mechanism is robust enough for repeated experiments and animal surgery experiments, but is not engineered to sufficient safety standards for human use. Mechanisms in place for interaction among the user community and dissemination of results include an electronic forum, an online software SVN repository, and meetings and workshops at major robotics conferences.

  15. EEG theta and Mu oscillations during perception of human and robot actions

    PubMed Central

    Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.

    2013-01-01

    The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375
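
    Mu suppression of this kind is commonly quantified as the log ratio of 8-13 Hz band power in an observation epoch relative to baseline. The sketch below uses synthetic signals and Welch's method purely to illustrate the computation; it does not reproduce the study's preprocessing or statistics.

```python
import numpy as np
from scipy.signal import welch

# Relative mu-band (8-13 Hz) power for a baseline vs an action-observation
# epoch, expressed as a log ratio.  Sampling rate and epochs are synthetic.
fs = 256
rng = np.random.default_rng(1)
baseline = rng.normal(size=fs * 2)               # 2 s of "EEG" at a sensorimotor channel
observation = rng.normal(size=fs * 2) * 0.8      # synthetic epoch with reduced amplitude

def band_power(x, fs, lo=8.0, hi=13.0):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return np.sum(pxx[mask]) * (f[1] - f[0])     # integrate the PSD over the band

suppression = np.log(band_power(observation, fs) / band_power(baseline, fs))
print(f"Mu suppression (log ratio): {suppression:.2f}")   # negative values indicate attenuation
```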

  16. EEG theta and Mu oscillations during perception of human and robot actions.

    PubMed

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other.

  17. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because the two share similar constraints (small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow computed from sample camera data is compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
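
    Dense optic flow of the kind compared against the simulated bee world can be computed, for example, with OpenCV's Farneback method. The frames below are synthetic stand-ins; on the robot, consecutive camera frames would be used instead.

```python
import numpy as np
import cv2

# Dense optic flow between two frames with the Farneback method; the frames
# are smoothed noise shifted by 3 pixels purely for illustration.
rng = np.random.default_rng(0)
texture = cv2.GaussianBlur((rng.random((120, 160)) * 255).astype(np.float32), (9, 9), 3)
frame0 = texture.astype(np.uint8)
frame1 = np.roll(frame0, shift=3, axis=1)        # simulate a 3-pixel horizontal image shift

# Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
print("Mean horizontal flow (px):", float(flow[..., 0].mean()))   # roughly 3 for this synthetic pair
```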

  18. Adding navigation, artificial audition and vital sign monitoring capabilities to a telepresence mobile robot for remote home care applications.

    PubMed

    Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois

    2017-07-01

    A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing to make them effective remote telepresence systems for home care assistance are the capabilities needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors; in other words, capabilities that minimize cognitive load and maximize situation awareness. This paper describes our approach to adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.

  19. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.
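
    The kind of sampling-based planner students implement and vary in such a project can be as small as the rapidly-exploring random tree (RRT) sketched below, here in a 2-D unit square with one circular obstacle. Production planners (for example those shipped with the Robot Operating System) add collision checking against real geometry, goal biasing, and path simplification; this is only an illustrative toy.

```python
import random, math

# Bare-bones 2-D RRT: sample a random configuration, extend the nearest tree
# node toward it by a fixed step, keep the new node if it is collision free,
# and stop when the goal is reached.
def collision_free(p, obstacle=((0.5, 0.5), 0.2)):
    (cx, cy), r = obstacle
    return math.hypot(p[0] - cx, p[1] - cy) > r

def rrt(start, goal, step=0.05, max_iters=5000, goal_tol=0.05, seed=0):
    random.seed(seed)
    nodes, parents = [start], {0: None}
    for _ in range(max_iters):
        sample = (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d) if d > step else sample
        if not collision_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:                 # walk back to the root
                path.append(nodes[j]); j = parents[j]
            return path[::-1]
    return None

path = rrt(start=(0.1, 0.1), goal=(0.9, 0.9))
print("waypoints:", 0 if path is None else len(path))
```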

  20. Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    NASA Technical Reports Server (NTRS)

    Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel

    2015-01-01

    The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms, in what is typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.

  1. A novel robotic platform for single-port abdominal surgery

    NASA Astrophysics Data System (ADS)

    Singh, Satwinder; Cheung, Jo L. K.; Sreedhar, Biji; Hoa, Xuyen Dai; Ng, Hoi Pang; Yeung, Chung Kwong

    2018-03-01

    In this paper, a novel robot-assisted platform for single-port minimally invasive surgery is presented. A miniaturized, fully internalized, in-vivo actuated robotic arm with seven degrees of freedom (dof) is designed. Due to in-vivo actuation, the system has a smaller footprint and can generate 20 N of gripping force. The complete work envelope of the robotic arms is 252 mm × 192 mm × 322 mm. With the assistance of the cannula-swivel system, the robotic arms can also be re-positioned and have multi-quadrant reachability without any additional incision. Surgical tasks such as lifting, gripping, suturing, and knot tying that are commonly used in a standard surgical procedure were performed to verify the dexterity of the robotic arms. A single-port trans-abdominal cholecystectomy in a porcine model was successfully performed to further validate its functionality.

  2. Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales

    NASA Astrophysics Data System (ADS)

    Peschel, J.; Young, S. N.

    2017-12-01

    The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence of this growth is the broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide users, from novice to experienced, in choosing appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to these best practices, which are also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners in general, as well as those working in fields related to emerging technologies.

  3. Precision instrument placement using a 4-DOF robot with integrated fiducials for minimally invasive interventions

    NASA Astrophysics Data System (ADS)

    Stenzel, Roland; Lin, Ralph; Cheng, Peng; Kronreif, Gernot; Kornfeld, Martin; Lindisch, David; Wood, Bradford J.; Viswanathan, Anand; Cleary, Kevin

    2007-03-01

    Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited view of the interventional field and the exact position of surgical instruments. We present an image-guided platform for precision placement of surgical instruments based upon a small four degree-of-freedom robot (B-RobII; ARC Seibersdorf Research GmbH, Vienna, Austria). This platform includes a custom instrument guide with an integrated spiral fiducial pattern as the robot's end-effector, and it uses intra-operative computed tomography (CT) to register the robot to the patient directly before the intervention. The physician can then use a graphical user interface (GUI) to select a path for percutaneous access, and the robot will automatically align the instrument guide along this path. Potential anatomical targets include the liver, kidney, prostate, and spine. This paper describes the robotic platform, workflow, software, and algorithms used by the system. To demonstrate the algorithmic accuracy and suitability of the custom instrument guide, we also present results from experiments as well as estimates of the maximum error between target and instrument tip.
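
    The CT-to-robot registration step described above is, in general form, a rigid point-based registration problem. The sketch below shows one standard way to solve it (a Kabsch/SVD fit on corresponding fiducial points) using NumPy; it is an illustrative stand-in, not the system's actual registration code or spiral-fiducial detection.

```python
# Generic rigid point-based registration (Kabsch/SVD), sketched as one way to
# register fiducial positions detected in CT to the same fiducials in the robot
# frame. This is illustrative, not the system's actual registration algorithm.
import numpy as np


def register_rigid(src: np.ndarray, dst: np.ndarray):
    """Find R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t


if __name__ == "__main__":
    # Synthetic fiducials in "robot" coordinates and their CT-space counterparts.
    rng = np.random.default_rng(0)
    robot_pts = rng.uniform(-50, 50, size=(6, 3))
    angle = np.deg2rad(20)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([10.0, -5.0, 30.0])
    ct_pts = robot_pts @ R_true.T + t_true
    R, t = register_rigid(robot_pts, ct_pts)
    fre = np.linalg.norm(robot_pts @ R.T + t - ct_pts, axis=1).mean()
    print("mean fiducial registration error:", fre)
```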

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Pin, F.G.

    This paper addresses the problem of time-optimal motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time-optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the acceleration on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time-optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.
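
    The parameterization by switch times can be illustrated with a small simulation: integrate the two wheel speeds under bang-bang accelerations that flip sign at given switch times and read off the final platform configuration. The kinematic model, parameters and switch times below are assumptions for illustration, not the authors' code or values.

```python
# Sketch of simulating bang-bang wheel-acceleration trajectories parameterized
# by switch times, in the spirit of the abstract above. Model, parameters and
# switch times are illustrative assumptions.
import numpy as np


def simulate(switch_times_left, switch_times_right, a_max=1.0, b=0.5,
             t_end=4.0, dt=0.001):
    """Integrate a two-wheel platform whose wheel accelerations flip sign
    (+a_max -> -a_max -> ...) at the given switch times."""
    def accel(t, switches):
        sign = 1.0
        for ts in switches:
            if t >= ts:
                sign = -sign
        return sign * a_max

    x = y = theta = 0.0
    vl = vr = 0.0
    for t in np.arange(0.0, t_end, dt):
        vl += accel(t, switch_times_left) * dt
        vr += accel(t, switch_times_right) * dt
        v = 0.5 * (vl + vr)          # forward speed
        w = (vr - vl) / b            # yaw rate (b = wheel separation)
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
    return x, y, theta, vl, vr


if __name__ == "__main__":
    # Three switch times: two on the left wheel, one on the right.
    print(simulate(switch_times_left=[1.0, 3.0], switch_times_right=[2.0]))
```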

  6. Role of Robotics in Children: A brave New World!

    PubMed

    Spinoit, Anne-Françoise; Nguyen, Hiep; Subramaniam, Ramnath

    2017-04-01

    The key to the evolution towards minimally invasive surgery is the availability of appropriate equipment, especially when procedures involve children. While robotic procedures in adults continue to struggle to prove measurable advantages over open or classical laparoscopic ones, the use of the robotic platform (RP) in pediatric urology is steadily increasing. Our aim was to review the contemporary literature regarding the use of robotic-assisted (RA) urologic interventions in children. A nonsystematic review of the literature was conducted through the PubMed database between 2002 and 2017, with an emphasis on large series. A few major challenges must be considered before using the RP in children: anesthesia, placement of trocars, and technical difficulties related to the small working space. To date, only robot-assisted pyeloplasty is recognized as safe and efficient, with outcomes equivalent to open or classical laparoscopy; this is supported by large multicentric studies, which are not available for most other procedures. RA procedures in children have been proven safe and effective. The field is still in its infancy, and further data over time are likely to prove different RA procedures equivalent to open or laparoscopic surgery in terms of outcome. The advent of the robotic platform represents an evolution towards minimizing surgical trauma for the child. Currently, the available platforms designed for adults are adapted to work in children; however, new technologies may be expected in the future to improve the robotic platform for minimally invasive surgery in children. To date, a few applications are considered safe and efficient (in experienced hands), provided the team is aware of the challenges to overcome regarding anesthesia, material, and technique adaptation to the patient. The most accepted robotic applications in children comprise robot-assisted pyeloplasty, hemi-nephrectomy, and ureteric reimplantation. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  7. Posture Affects How Robots and Infants Map Words to Objects

    PubMed Central

    Morse, Anthony F.; Benitez, Viridian L.; Belpaeme, Tony; Cangelosi, Angelo; Smith, Linda B.

    2015-01-01

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body – and its momentary posture – may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space. PMID:25785834

  8. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
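
    The core loop described above (telemetry in, articulated model updated, off-nominal states flagged for display) can be sketched as follows. The joint names and data structures are hypothetical; this is not the NASA visualization software's API.

```python
# Conceptual sketch of the core update loop of an articulated-robot display:
# telemetered joint values drive the pose of each articulated element. The data
# structures are illustrative assumptions only.
import math
from dataclasses import dataclass
from typing import Dict


@dataclass
class Joint:
    name: str
    angle_rad: float = 0.0            # latest telemetered value
    nominal_max: float = math.pi / 2  # assumed nominal range for annotation


class ArticulatedModel:
    def __init__(self, joint_names):
        self.joints: Dict[str, Joint] = {n: Joint(n) for n in joint_names}

    def apply_telemetry(self, packet: Dict[str, float]):
        """Update joint states from one telemetry packet (name -> radians)."""
        for name, value in packet.items():
            if name in self.joints:
                self.joints[name].angle_rad = value

    def annotate(self):
        """Flag off-nominal joints, e.g., for coloring in the 3D view."""
        return {j.name: ("OFF-NOMINAL" if abs(j.angle_rad) > j.nominal_max
                         else "nominal") for j in self.joints.values()}


if __name__ == "__main__":
    model = ArticulatedModel(["arm_shoulder", "arm_elbow", "left_leg_hip"])
    model.apply_telemetry({"arm_shoulder": 0.4, "arm_elbow": 1.9})
    print(model.annotate())
```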

  9. Development and Validation of a Novel Robotic Procedure Specific Simulation Platform: Partial Nephrectomy.

    PubMed

    Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S

    2015-08-01

    We developed a novel procedure specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). Post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity evaluated correlation between virtual reality renorrhaphy task and a live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
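
    For readers unfamiliar with the statistics named above, the following sketch runs the same kinds of tests (Kruskal-Wallis across cohorts for construct validity, Spearman correlation for concurrent validity) with SciPy on synthetic scores; the numbers are not the study's data.

```python
# Sketch of the statistical tests named in the abstract, applied to synthetic
# scores (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
novice = rng.normal(50, 10, 15)
intermediate = rng.normal(65, 10, 13)
expert = rng.normal(75, 10, 14)

# Construct validity: do the three cohorts differ?
h, p_kw = stats.kruskal(novice, intermediate, expert)
print(f"Kruskal-Wallis H={h:.2f}, p={p_kw:.4f}")

# Concurrent validity: correlate simulator scores with live-task scores.
sim_scores = rng.normal(70, 10, 14)
porcine_scores = sim_scores + rng.normal(0, 5, 14)
rho, p_sp = stats.spearmanr(sim_scores, porcine_scores)
print(f"Spearman rho={rho:.2f}, p={p_sp:.4f}")
```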

  10. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the Robot Operating System (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
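
    Conceptually, the bridge subscribes to ROS topics and forwards their content over a TCP socket (and vice versa). The Python sketch below conveys only that idea: it relays a transform as newline-delimited JSON, whereas the real ROS-IGTL-Bridge is a C++ node that speaks the binary OpenIGTLink wire protocol. The topic name and port are assumptions.

```python
# Conceptual sketch only: forward a ROS transform over a TCP socket. The JSON
# framing below is a placeholder and is NOT the OpenIGTLink protocol used by
# the actual ROS-IGTL-Bridge node.
import json
import socket

import rospy
from geometry_msgs.msg import TransformStamped

sock = socket.create_connection(("127.0.0.1", 18944))  # host/port illustrative


def on_transform(msg: TransformStamped):
    t, q = msg.transform.translation, msg.transform.rotation
    payload = {"name": msg.child_frame_id,
               "translation": [t.x, t.y, t.z],
               "rotation": [q.x, q.y, q.z, q.w]}
    sock.sendall((json.dumps(payload) + "\n").encode())


if __name__ == "__main__":
    rospy.init_node("transform_tcp_relay")
    # Topic name is an assumption for the sketch.
    rospy.Subscriber("/surgical_robot/tool_pose", TransformStamped, on_transform)
    rospy.spin()
```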

  11. Industrial-Like Vehicle Platforms for Postgraduate Laboratory Courses on Robotics

    ERIC Educational Resources Information Center

    Navarro, P. J.; Fernandez, C.; Sanchez, P.

    2013-01-01

    The interdisciplinary nature of robotics allows mobile robots to be used successfully in a broad range of courses at the postgraduate level and in Ph.D. research. Practical industrial-like mobile robotic demonstrations encourage students and increase their motivation by providing them with learning benefits not achieved with traditional…

  12. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration

    PubMed Central

    Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael

    2017-01-01

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381
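
    As a rough illustration of talking to such hardware modules over CAN, the sketch below sends a command frame and polls for a sensor frame with the python-can library (SocketCAN backend). The arbitration IDs and payload layout are assumptions; the paper defines its own lightweight protocol on top of CAN.

```python
# Sketch of exchanging frames on a CAN bus with the python-can library.
# Node IDs and payload layout are assumptions for illustration.
import can

MOTOR_CONTROLLER_ID = 0x120   # illustrative arbitration ID
IMU_ID = 0x220                # illustrative arbitration ID

with can.Bus(channel="can0", bustype="socketcan") as bus:
    # Command frame to a motor controller: two signed 16-bit wheel setpoints.
    left, right = 500, -500
    cmd = can.Message(arbitration_id=MOTOR_CONTROLLER_ID,
                      data=list(left.to_bytes(2, "big", signed=True) +
                                right.to_bytes(2, "big", signed=True)),
                      is_extended_id=False)
    bus.send(cmd)

    # Poll for sensor frames and dispatch on arbitration ID.
    msg = bus.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id == IMU_ID:
        print("IMU frame:", msg.data.hex())
```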

  13. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.

    PubMed

    Losada, Diego P; Fernández, Joaquín L; Paz, Enrique; Sanz, Rafael

    2017-05-03

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead.

  14. ATHLETE as a Mobile ISRU and Regolith Construction Platform

    NASA Technical Reports Server (NTRS)

    Howe, A. Scott; Wilcox, Brian; Barmatz, Martin; Voecks, Gerald

    2016-01-01

    The All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robotic mobility platform can provide precision positioning and mobility for site preparation and regolith construction needs. ATHLETE is a multi-use platform designed to use swap-out tools and implements that can be applied to any number of tasks that need precision limb manipulation or mobility. Major capabilities include off-loading habitats, transporting surface assets, robotically assembling outposts from multiple mission manifests, and supporting science and technology objectives. This paper describes conceptual approaches for supporting NASA regolith construction research, such as additive construction, modular brick and panel factory, and mobile ISRU platform.

  15. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated Rover motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process, such as the structure, driving linkages, and motors and transmissions, are treated separately.

  16. JacksonBot - Design, Simulation and Optimal Control of an Action Painting Robot

    NASA Astrophysics Data System (ADS)

    Raschke, Michael; Mombaur, Katja; Schubert, Alexander

    We present the robotics platform JacksonBot, which is capable of producing paintings inspired by the Action Painting style of Jackson Pollock. A dynamically moving robot arm splashes color from a container at the end effector onto the canvas. The paintings produced by this platform rely on a combination of algorithmically generated robot arm motions and the random effects of the splashing color. The robot can be considered a complex and powerful tool for generating artworks programmed by a user. Desired end-effector motions can be prescribed by mathematical functions, by point sequences, or by data glove motions. We have evaluated the effect of different shapes of input motions on the resulting painting. To compute the robot joint trajectories necessary to move along a desired end-effector path, we use an optimal-control-based approach to solve the inverse kinematics problem.
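
    The paper solves the inverse kinematics with an optimal control formulation; as a much simpler illustration of computing joint trajectories for a desired end-effector path, the sketch below solves a small per-waypoint optimization for a planar 3-link arm with SciPy. The link lengths and the path are assumptions.

```python
# Simplified sketch: track a desired planar end-effector path with a 3-link arm
# by solving a small optimization per waypoint. This is not the paper's
# optimal-control formulation; link lengths and path are illustrative.
import numpy as np
from scipy.optimize import minimize

L = np.array([0.4, 0.3, 0.2])          # link lengths (m), assumed


def fk(q):
    """Planar forward kinematics: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])


def ik_step(target, q0):
    """One IK solve: reach the target while staying close to the previous pose."""
    cost = lambda q: np.sum((fk(q) - target) ** 2) + 1e-3 * np.sum((q - q0) ** 2)
    return minimize(cost, q0, method="BFGS").x


if __name__ == "__main__":
    # Desired end-effector path: a short horizontal sweep, standing in for a
    # stroke prescribed by a mathematical function or data glove motion.
    path = [np.array([0.5 + 0.05 * i, 0.3]) for i in range(5)]
    q = np.array([0.3, 0.3, 0.3])
    for target in path:
        q = ik_step(target, q)
        print(np.round(q, 3), "->", np.round(fk(q), 3))
```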

  17. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. Here we propose a human face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware consists of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry exhibited by human faces. By using the output of this module, the autonomous agent always keeps the human face targeted frontally. To accomplish this, the robot platform performs an arc centered at the human, while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, both positive and negative evidence is used to recognize facial expressions.
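
    The temporal propagation idea in the classifier can be illustrated with a minimal recursive Bayes filter over expression states: the belief from the previous instant is propagated through a transition model and reweighted by the current (positive or negative) evidence. The states, transition matrix, and likelihoods below are invented for illustration and are not the paper's Dynamic Bayesian Network.

```python
# Minimal recursive Bayes filter over facial-expression states. All numbers are
# illustrative assumptions, not the paper's model.
import numpy as np

STATES = ["neutral", "happy", "sad"]

# P(state_t | state_{t-1}): expressions tend to persist between frames.
TRANSITION = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])


def update(belief, likelihood):
    """One filter step: propagate the previous belief, then weight by evidence."""
    predicted = TRANSITION.T @ belief
    posterior = predicted * likelihood      # combines positive/negative evidence
    return posterior / posterior.sum()


if __name__ == "__main__":
    belief = np.array([1 / 3, 1 / 3, 1 / 3])
    # Per-frame likelihoods P(observation | state) from a feature detector.
    for likelihood in [np.array([0.5, 0.4, 0.1]),
                       np.array([0.2, 0.7, 0.1]),
                       np.array([0.1, 0.8, 0.1])]:
        belief = update(belief, likelihood)
        print(dict(zip(STATES, np.round(belief, 3))))
```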

  18. Direct target NOTES: prospective applications for next generation robotic platforms.

    PubMed

    Atallah, S; Hodges, A; Larach, S W

    2018-05-01

    A new era in surgical robotics has centered on alternative access to anatomic targets and next generation designs include flexible, single-port systems which follow circuitous rather than straight pathways. Such systems maintain a small footprint and could be utilized for specialized operations based on direct organ target natural orifice transluminal endoscopic surgery (NOTES), of which transanal total mesorectal excision (taTME) is an important derivative. During two sessions, four direct target NOTES operations were conducted on a cadaveric model using a flexible robotic system to demonstrate proof-of-concept of the application of a next generation robotic system to specific types of NOTES operations, all of which required removal of a direct target organ through natural orifice access. These four operations were (a) robotic taTME, (b) robotic transvaginal hysterectomy in conjunction with (c) robotic transvaginal salpingo-oophorectomy, and in an ex vivo model, (d) trans-cecal appendectomy. Feasibility was demonstrated in all cases using the Flex ® Robotic System with Colorectal Drive. During taTME, the platform excursion was 17 cm along a non-linear path; operative time was 57 min for the transanal portion of the dissection. Robotic transvaginal hysterectomy was successfully completed in 78 min with transvaginal extraction of the uterus, although laparoscopic assistance was required. Robotic transvaginal unilateral salpingo-oophorectomy with transvaginal extraction of the ovary and fallopian tube was performed without laparoscopic assistance in 13.5 min. In an ex vivo model, a robotic trans-cecal appendectomy was also successfully performed for the purpose of demonstrating proof-of-concept only; this was completed in 24 min. A flexible robotic system has the potential to access anatomy along circuitous paths, making it a suitable platform for direct target NOTES. The conceptual operations posed could be considered suitable for next generation robotics once the technology is optimized, and after further preclinical validation.

  19. PaR-PaR Laboratory Automation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, G; Stawski, N; Poust, S

    2013-05-01

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  20. PaR-PaR laboratory automation platform.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Poust, Sean; Bi, Changhao; Keasling, Jay D; Hillson, Nathan J

    2013-05-17

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  1. Error modeling and sensitivity analysis of a parallel robot with SCARA (selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error characteristics of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are plotted to express each geometric error's influence on the pose errors of the moving platform. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also determined. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method discloses the process of error transmission more thoroughly and enhances the efficacy of accuracy design and calibration.
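
    A crude way to convey what a sensitivity analysis of geometric errors does is a Monte Carlo experiment on a toy mechanism: perturb one geometric parameter at a time and measure the induced position error of the platform point. The planar 2-link chain and error magnitudes below are illustrative assumptions; the paper derives an analytical error model for a parallel SCARA-motion robot.

```python
# Toy Monte Carlo illustration of error sensitivity on a planar 2-link chain.
# Not the paper's analytical error model; parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
nominal_params = {"l1": 0.35, "l2": 0.30,
                  "a1": np.deg2rad(40), "a2": np.deg2rad(55)}


def tip(l1, l2, a1, a2):
    """Position of the platform point of a simple planar 2-link chain."""
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a1 + a2),
                     l1 * np.sin(a1) + l2 * np.sin(a1 + a2)])


nominal_tip = tip(**nominal_params)
# Assumed standard deviations of the geometric errors (m and rad).
sigma = {"l1": 1e-4, "l2": 1e-4, "a1": 1e-3, "a2": 1e-3}

for name, s in sigma.items():
    errs = []
    for _ in range(2000):
        p = dict(nominal_params)
        p[name] += rng.normal(0.0, s)          # perturb one parameter at a time
        errs.append(np.linalg.norm(tip(**p) - nominal_tip))
    # A simple sensitivity index: mean tip error per unit parameter deviation.
    print(f"{name}: mean tip error {np.mean(errs) * 1e6:8.1f} um, "
          f"sensitivity ~ {np.mean(errs) / s:.3f}")
```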

  2. Precision in robotic rectal surgery using the da Vinci Xi system and integrated table motion, a technical note.

    PubMed

    Panteleimonitis, Sofoklis; Harper, Mick; Hall, Stuart; Figueiredo, Nuno; Qureshi, Tahseen; Parvaiz, Amjad

    2017-09-15

    Robotic rectal surgery is becoming increasingly more popular among colorectal surgeons. However, time spent on robotic platform docking, arm clashing and undocking of the platform during the procedure are factors that surgeons often find cumbersome and time consuming. The newest surgical platform, the da Vinci Xi, coupled with integrated table motion can help to overcome these problems. This technical note aims to describe a standardised operative technique of single docking robotic rectal surgery using the da Vinci Xi system and integrated table motion. A stepwise approach of the da Vinci docking process and surgical technique is described accompanied by an intra-operative video that demonstrates this technique. We also present data collected from a prospectively maintained database. 33 consecutive rectal cancer patients (24 male, 9 female) received robotic rectal surgery with the da Vinci Xi during the preparation of this technical note. 29 (88%) patients had anterior resections, and four (12%) had abdominoperineal excisions. There were no conversions, no anastomotic leaks and no mortality. Median operation time was 331 (249-372) min, blood loss 20 (20-45) mls and length of stay 6.5 (4-8) days. 30-day readmission rate and re-operation rates were 3% (n = 1). This standardised technique of single docking robotic rectal surgery with the da Vinci Xi is safe, feasible and reproducible. The technological advances of the new robotic system facilitate the totally robotic single docking approach.

  3. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.

  4. Outcomes of a virtual-reality simulator-training programme on basic surgical skills in robot-assisted laparoscopic surgery.

    PubMed

    Phé, Véronique; Cattarino, Susanna; Parra, Jérôme; Bitker, Marc-Olivier; Ambrogi, Vanina; Vaessen, Christophe; Rouprêt, Morgan

    2017-06-01

    The utility of the virtual-reality robotic simulator in training programmes has not been clearly evaluated. Our aim was to evaluate the impact of a virtual-reality robotic simulator-training programme on basic surgical skills. A simulator-training programme in robotic surgery, using the da Vinci Skills Simulator, was evaluated in a population including junior and seasoned surgeons, and non-physicians. Their performances on robotic dots and suturing-skin pod platforms before and after virtual-simulation training were rated anonymously by surgeons experienced in robotics. 39 participants were enrolled: 14 medical students and residents in surgery, 14 seasoned surgeons, 11 non-physicians. Junior and seasoned surgeons' performances on the platforms were not significantly improved after virtual-reality robotic simulation in any of the skill domains, in contrast to non-physicians. The benefit of virtual-reality simulator training on several tasks for basic skills in robotic surgery was not obvious among surgeons in our initial and early experience with the simulator. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Design and evaluation of NEUROBike: a neurorehabilitative platform for bedridden post-stroke patients.

    PubMed

    Monaco, Vito; Galardi, Giuseppe; Coscia, Martina; Martelli, Dario; Micera, Silvestro

    2012-11-01

    Over the past decades, a large number of robotic platforms have been developed to provide rehabilitative treatments aimed at recovering walking abilities in post-stroke patients. Unfortunately, they do not significantly influence patients' performance beyond three months after the stroke. One of the main reasons underlying this result seems to be related to the time of intervention. Specifically, although experimental evidence suggests that early (i.e., within the first days after the injury) and intense neuro-rehabilitative treatments can significantly favor the functional recovery of post-stroke patients, existing robots require patients to be verticalized and consequently cannot be used to treat them immediately after the trauma. This paper introduces a new robotic platform, named NEUROBike, designed to provide neuro-rehabilitative treatments to bedridden patients. It was designed to provide early and well-targeted rehabilitation therapy, in terms of kinesiology, effort, and fatigue, based on exercises functionally related to daily motor tasks. For this purpose, kinematic models of leg-joint angular excursions during both walking and sit-to-stand were developed and implemented in control algorithms driving both passive and active exercises. Finally, a set of pilot tests was carried out to evaluate the performance of the robotic platform on healthy subjects.

  6. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    DTIC Science & Technology

    2009-09-01

    implemented for deploying a group of three flying robots with downward facing cameras to monitor an environment on the ground. Thirdly, the multi-robot...theoretically proven, and implemented on multi-robot platforms. Thesis Supervisor: Daniela Rus Title: Professor of Electrical Engineering and Computer...often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul

  7. Pick-up, transport and release of a molecular cargo using a small-molecule robotic arm

    NASA Astrophysics Data System (ADS)

    Kassem, Salma; Lee, Alan T. L.; Leigh, David A.; Markevicius, Augustinas; Solà, Jordi

    2016-02-01

    Modern-day factory assembly lines often feature robots that pick up, reposition and connect components in a programmed manner. The idea of manipulating molecular fragments in a similar way has to date only been explored using biological building blocks (specifically DNA). Here, we report on a wholly artificial small-molecule robotic arm capable of selectively transporting a molecular cargo in either direction between two spatially distinct, chemically similar, sites on a molecular platform. The arm picks up/releases a 3-mercaptopropanehydrazide cargo by formation/breakage of a disulfide bond, while dynamic hydrazone chemistry controls the cargo binding to the platform. Transport is controlled by selectively inducing conformational and configurational changes within an embedded hydrazone rotary switch that steers the robotic arm. In a three-stage operation, 79-85% of 3-mercaptopropanehydrazide molecules are transported in either (chosen) direction between the two platform sites, without the cargo at any time fully dissociating from the machine nor exchanging with other molecules in the bulk.

  8. Pick-up, transport and release of a molecular cargo using a small-molecule robotic arm.

    PubMed

    Kassem, Salma; Lee, Alan T L; Leigh, David A; Markevicius, Augustinas; Solà, Jordi

    2016-02-01

    Modern-day factory assembly lines often feature robots that pick up, reposition and connect components in a programmed manner. The idea of manipulating molecular fragments in a similar way has to date only been explored using biological building blocks (specifically DNA). Here, we report on a wholly artificial small-molecule robotic arm capable of selectively transporting a molecular cargo in either direction between two spatially distinct, chemically similar, sites on a molecular platform. The arm picks up/releases a 3-mercaptopropanehydrazide cargo by formation/breakage of a disulfide bond, while dynamic hydrazone chemistry controls the cargo binding to the platform. Transport is controlled by selectively inducing conformational and configurational changes within an embedded hydrazone rotary switch that steers the robotic arm. In a three-stage operation, 79-85% of 3-mercaptopropanehydrazide molecules are transported in either (chosen) direction between the two platform sites, without the cargo at any time fully dissociating from the machine nor exchanging with other molecules in the bulk.

  9. Free-standing leaping experiments with a power-autonomous elastic-spined quadruped

    NASA Astrophysics Data System (ADS)

    Pusey, Jason L.; Duperret, Jeffrey M.; Haynes, G. Clark; Knopf, Ryan; Koditschek, Daniel E.

    2013-05-01

    We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot's "body energy" during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid's initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits.

  10. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents produce actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than both mechanical agents. There was also a relationship between the dynamics of the muscle activity and motion dynamics in stimuli. Overall our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  11. Golden Gait: An Optimization Theory Perspective on Human and Humanoid Walking

    PubMed Central

    Iosa, Marco; Morone, Giovanni; Paolucci, Stefano

    2017-01-01

    Human walking is a complex task which involves hundreds of muscles, bones and joints working together to deliver harmonic movements, with the need to find an equilibrium between moving forward and maintaining stability. Many different computational approaches have been used to explain human walking mechanisms, from pendular models to fractal approaches. A new perspective can be gained from using the principles developed in the field of optimization theory, and in particular the branch of game theory. Here we provide new insight into human walking by showing that the trade-off between advancement and equilibrium managed during walking has the same solution as the Ultimatum game, one of the most famous paradigms of game theory, and that this solution is the golden ratio. The golden ratio is an irrational number that has been found in many biological and natural systems self-organized in a harmonic, asymmetric, and fractal structure. Recently, the golden ratio has also been found to be the equilibrium point between two players involved in the Ultimatum game. It has been suggested that this result may be due to the fact that the golden ratio is perceived as the fairest asymmetric solution by the two players. The golden ratio is also the most common proportion between the stance and swing phases of human walking. This approach may explain the importance of harmony in human walking, and provide new perspectives for developing quantitative assessments of human walking, efficient humanoid robotic walkers, and effective neurorobots for rehabilitation. PMID:29311890
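
    A quick numeric illustration of the claim: using a typical stance/swing split of roughly 62%/38% of the gait cycle (textbook-style values, not data from the paper), the stance-to-swing and cycle-to-stance ratios both land near the golden ratio.

```python
# Illustrative check of the golden-ratio gait claim; stance/swing fractions are
# assumed typical values, not data from the paper.
import math

phi = (1 + math.sqrt(5)) / 2       # golden ratio, ~1.618
stance, swing = 0.62, 0.38         # assumed fractions of one gait cycle

# If stance were exactly 1/phi of the cycle, then swing = 1 - 1/phi = 2 - phi
# and stance/swing = phi, because the golden ratio satisfies 1/phi = phi - 1.
print(f"golden ratio       : {phi:.3f}")
print(f"stance/swing ratio : {stance / swing:.3f}")
print(f"cycle/stance ratio : {1.0 / stance:.3f}")
```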

  12. Current status of validation for robotic surgery simulators - a systematic review.

    PubMed

    Abboudi, Hamid; Khan, Mohammed S; Aboumarzouk, Omar; Guru, Khurshid A; Challacombe, Ben; Dasgupta, Prokar; Ahmed, Kamran

    2013-02-01

    To analyse studies validating the effectiveness of robotic surgery simulators. The MEDLINE(®), EMBASE(®) and PsycINFO(®) databases were systematically searched until September 2011. References from retrieved articles were reviewed to broaden the search. The simulator name, training tasks, participant level, training duration and evaluation scoring were extracted from each study. We also extracted data on feasibility, validity, cost-effectiveness, reliability and educational impact. We identified 19 studies investigating simulation options in robotic surgery. There are five different robotic surgery simulation platforms available on the market. In all, 11 studies sought opinion and compared performance between two different groups; 'expert' and 'novice'. Experts ranged in experience from 21-2200 robotic cases. The novice groups consisted of participants with no prior experience on a robotic platform and were often medical students or junior doctors. The Mimic dV-Trainer(®), ProMIS(®), SimSurgery Educational Platform(®) (SEP) and Intuitive systems have shown face, content and construct validity. The Robotic Surgical SimulatorTM system has only been face and content validated. All of the simulators except SEP have shown educational impact. Feasibility and cost-effectiveness of simulation systems was not evaluated in any trial. Virtual reality simulators were shown to be effective training tools for junior trainees. Simulation training holds the greatest potential to be used as an adjunct to traditional training methods to equip the next generation of robotic surgeons with the skills required to operate safely. However, current simulation models have only been validated in small studies. There is no evidence to suggest one type of simulator provides more effective training than any other. More research is needed to validate simulated environments further and investigate the effectiveness of animal and cadaveric training in robotic surgery. © 2012 BJU International.

  13. Multi-Robot Assembly Strategies and Metrics.

    PubMed

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  14. Multi-Robot Assembly Strategies and Metrics

    PubMed Central

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  15. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
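
    To show what running a boosted cascade classifier looks like in practice, the sketch below uses OpenCV's stock Haar face cascade on a webcam stream, since the project's custom Adaboost models (license plates, soda cans) are not public. The camera index is an assumption.

```python
# Sketch of running a boosted cascade classifier with OpenCV. A stock face
# cascade stands in for the project's custom-trained detectors.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)              # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```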

  16. Robotic assisted andrological surgery

    PubMed Central

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled to any previous visual loop or magnification techniques. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution by the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, plastics and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, surgeon ergonomics, scalability of motion, multi-input visual interphases with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  17. A new family of omnidirectional and holonomic wheeled platforms for mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Killough, S.M.

    1994-08-01

    This paper presents the concepts for a new family of holonomic wheeled platforms that feature full omnidirectionality with simultaneous and independently controlled rotational and translational motion capabilities. The authors first present the "orthogonal-wheels" concept and the two major wheel assemblies on which these platforms are based. The authors then describe how a combination of these assemblies with appropriate control can be used to generate an omnidirectional capability for mobile robot platforms. Several alternative designs are considered, and their respective characteristics with respect to rotational and translational motion control are discussed. The design and control of a prototype platform developed to test and demonstrate the proposed concepts is then described, and experimental results illustrating the full omnidirectionality of the platforms with decoupled rotational and translational degrees of freedom are presented.

  18. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot.

    PubMed

    Kitson, Philip J; Glatzel, Stefan; Cronin, Leroy

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic 'programs' which can run on similar low cost, user-constructed robotic platforms towards an 'open-source' regime in the area of chemical synthesis.
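
    The control software itself is not reproduced here; the sketch below only illustrates the general pattern of a hard-coded, scalable step sequence sent to a printer-style controller over a serial link with pyserial. The G-code-like commands, serial port, and scaling parameter are assumptions, not the authors' protocol.

```python
# Illustrative sketch of a hard-coded synthesis "program" driving a modified
# 3D-printer platform over a serial link. Commands, port and scaling are
# assumptions for illustration only.
import time
import serial

SCALE = 1.0   # adjusting this scales reagent volumes, mirroring the paper's idea

STEPS = [
    ("move head over reaction vessel", "G1 X40 Y25 F3000"),
    ("dispense reagent A",             f"M700 P0 V{5.0 * SCALE:.1f}"),
    ("dispense reagent B",             f"M700 P1 V{2.5 * SCALE:.1f}"),
    ("wait for reaction",              "G4 S600"),
]

with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as port:
    time.sleep(2)                       # let the controller reset
    for label, command in STEPS:
        print("step:", label)
        port.write((command + "\n").encode())
        print("controller:", port.readline().decode().strip())
```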

  19. The digital code driven autonomous synthesis of ibuprofen automated in a 3D-printer-based robot

    PubMed Central

    Kitson, Philip J; Glatzel, Stefan

    2016-01-01

    An automated synthesis robot was constructed by modifying an open source 3D printing platform. The resulting automated system was used to 3D print reaction vessels (reactionware) of differing internal volumes using polypropylene feedstock via a fused deposition modeling 3D printing approach and subsequently make use of these fabricated vessels to synthesize the nonsteroidal anti-inflammatory drug ibuprofen via a consecutive one-pot three-step approach. The synthesis of ibuprofen could be achieved on different scales simply by adjusting the parameters in the robot control software. The software for controlling the synthesis robot was written in the python programming language and hard-coded for the synthesis of ibuprofen by the method described, opening possibilities for the sharing of validated synthetic ‘programs’ which can run on similar low cost, user-constructed robotic platforms towards an ‘open-source’ regime in the area of chemical synthesis. PMID:28144350

  20. Robots Save Soldiers' Lives Overseas (MarcBot)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Marshall Space Flight Center mobile communications platform designs for future lunar missions led to improvements to fleets of tactical robots now being deployed by U.S. Army. The Multi-function Agile Remote Control Robot (MARCbot) helps soldiers search out and identify improvised explosive devices. NASA used the MARCbots to test its mobile communications platform, and in working with it, made the robot faster while adding capabilities -- upgrading to a digital camera, encrypting the controllers and video transmission, as well as increasing the range and adding communications abilities. They also simplified the design, providing more plug-and-play sensors and replacing some of the complex electronics with more trouble-free, low-cost components. Applied Geo Technology, a tribally-owned corporation in Choctaw, Mississippi, was given the task of manufacturing the modified robots. The company is now producing 40 units per month, 300 of which have already been deployed overseas.
