Sample records for autonomous humanoid robot

  1. Autonomous learning in humanoid robotics through mental imagery.

    PubMed

    Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo

    2013-05-01

In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented that allows the iCub humanoid robot to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, generates additional imaginary examples that the controller itself can use to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.
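The mental-training loop described in the abstract can be sketched as follows. This is a loose illustration only: the linear model, the uniform sampling of imaginary inputs, and all function names are assumptions, not the paper's actual neural architecture.

```python
import numpy as np

def linfit(X, y):
    """Least-squares linear model; returns a predictor callable."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Z: np.asarray(Z) @ w

def mental_training(fit, X_real, y_real, n_imaginary=100, rng=None):
    """Fit a first model on real sensorimotor samples, use it to label
    self-generated 'imaginary' inputs, then refit on the union."""
    if rng is None:
        rng = np.random.default_rng(0)
    first = fit(X_real, y_real)
    # Imaginary inputs drawn from the range of real experience (an assumption).
    X_imag = rng.uniform(X_real.min(axis=0), X_real.max(axis=0),
                         size=(n_imaginary, X_real.shape[1]))
    y_imag = first(X_imag)
    X_all = np.vstack([X_real, X_imag])
    y_all = np.concatenate([y_real, y_imag])
    return fit(X_all, y_all)

# Toy sensorimotor mapping: 2 inputs, one output.
rng = np.random.default_rng(42)
X_real = rng.standard_normal((20, 2))
y_real = X_real @ np.array([1.0, 2.0])
model = mental_training(linfit, X_real, y_real)
```

With a richer, non-linear learner the imaginary examples act as a regularizing form of data augmentation; with the exact linear toy model above they simply preserve the fit.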

  2. Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David

    2003-01-01

Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint project of NASA and the Defense Advanced Research Projects Agency (DARPA), and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).

  3. Dissociated emergent-response system and fine-processing system in human neural network and a heuristic neural architecture for autonomous humanoid robots.

    PubMed

    Yan, Xiaodan

    2010-01-01

The current study investigated the functional connectivity of the primary sensory system with resting-state fMRI and applied this knowledge to the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. Dissociation was found within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. This neural architecture inspired the design of a dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and a consolidation center to coordinate the two systems. This design can help autonomous robots to detect and respond quickly to danger, so as to maintain their sustainability and independence.
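The correlation half of the analysis above reduces to computing pairwise Pearson correlations between regional time series. A minimal sketch with synthetic data (the region count, noise model, and function names are assumptions for illustration; the paper additionally uses Granger causality, which is not shown):

```python
import numpy as np

def connectivity_matrix(signals):
    """Pairwise Pearson correlations between regional time series.
    signals: (n_regions, n_timepoints) array."""
    return np.corrcoef(signals)

# Toy data: regions 0 and 1 share a common driver; region 2 is independent.
rng = np.random.default_rng(0)
driver = rng.standard_normal(200)
signals = np.vstack([
    driver + 0.1 * rng.standard_normal(200),
    driver + 0.1 * rng.standard_normal(200),
    rng.standard_normal(200),
])
C = connectivity_matrix(signals)
```

The strongly connected pair (regions 0 and 1) shows a correlation near 1, while the independent region correlates near 0, mirroring the dissociation the study reports between sensory subsystems.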

  4. Mobile Autonomous Humanoid Assistant

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.

    2004-01-01

A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway™ Robotic Mobility Platform, yielding a dexterous, maneuverable humanoid well suited to aiding human co-workers in a range of environments. This system uses stereo vision to locate human teammates and tools, and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  5. Supervising Remote Humanoids Across Intermediate Time Delay

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Bluethmann, William; Goza, Michael; Ambrose, Robert; Rabe, Kenneth; Allan, Mark

    2006-01-01

    The President's Vision for Space Exploration, laid out in 2004, relies heavily upon robotic exploration of the lunar surface in early phases of the program. Prior to the arrival of astronauts on the lunar surface, these robots will be required to be controlled across space and time, posing a considerable challenge for traditional telepresence techniques. Because time delays will be measured in seconds, not minutes as is the case for Mars Exploration, uploading the plan for a day seems excessive. An approach for controlling humanoids under intermediate time delay is presented. This approach uses software running within a ground control cockpit to predict an immersed robot supervisor's motions which the remote humanoid autonomously executes. Initial results are presented.

  6. Development of autonomous eating mechanism for biomimetic robots

    NASA Astrophysics Data System (ADS)

    Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung

    2005-12-01

Most recently developed robots are human-friendly robots that imitate animals or humans, such as entertainment robots, biomimetic robots, and humanoid robots. Interest in these robots is growing as society focuses on health, welfare, and aging. Autonomous eating is a unique and inherent behavior of pets and animals. Most entertainment and pet robots use internal batteries, and robots with internal batteries cannot operate while the battery is charging. Therefore, if a robot has an autonomous function for eating batteries as its feed, it can not only operate while recharging but also becomes more human-friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat, and evacuate it by itself. This work describes the sub-parts of the developed mechanism, such as the head-part, mouth-part, and stomach-part. In addition, the control system of the autonomous eating mechanism is described.

  7. An Integrated Framework for Human-Robot Collaborative Manipulation.

    PubMed

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
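The confidence-driven leader/follower switch described above reduces to a simple decision rule. A minimal sketch (the threshold value and function name are illustrative assumptions, not taken from the paper):

```python
def select_role(confidence, threshold=0.7):
    """Leader/follower switching: act proactively (lead) when the motion
    predictor is confident about the human's next move, otherwise fall
    back to reactive control (follow). Threshold is illustrative."""
    return "proactive" if confidence >= threshold else "reactive"

# As the predictor's confidence fluctuates during the lift, the robot
# switches roles autonomously.
roles = [select_role(c) for c in (0.9, 0.4, 0.85)]
```

In the paper, the proactive branch acts on the predicted human motion while the reactive branch levels the table; here only the switching logic is shown.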

  8. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    PubMed

Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2018-06-01

Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As such an episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure that learns events, episodes, and higher-level sequences such as daily episodes. Moreover, it can robustly retrieve the correct episode from partial input cues. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot Mybot for performing the three tasks of arranging toys, making cereal, and disposing of garbage.
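The key property claimed above, retrieving a full episode from a partial cue, can be sketched with a much-simplified matcher. This is ART-inspired only in the loosest sense (a vigilance threshold rejecting weak matches); the scoring rule, threshold, and episode data are assumptions for illustration, not the Deep ART model:

```python
def retrieve_episode(cue, episodes, vigilance=0.5):
    """Return the stored episode best matching a partial cue.
    Match score = fraction of cue events found, in order, in the episode;
    a vigilance threshold rejects weak matches."""
    def score(episode):
        it = iter(episode)
        hits = sum(1 for event in cue if event in it)  # in-order membership
        return hits / len(cue)
    best = max(episodes, key=score)
    return best if score(best) >= vigilance else None

# Hypothetical stored episodes, loosely echoing the paper's tasks.
episodes = [
    ["pick up toy", "carry to box", "drop in box"],
    ["pour cereal", "add milk", "serve bowl"],
]
```

Given the partial cue `["add milk", "serve bowl"]`, the matcher recovers the full cereal-making episode; a cue matching nothing returns `None` instead of a spurious episode.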

  9. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with motor imagery tasks. PMID:25140134

  10. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    PubMed

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants ( N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior - (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  11. Development of a neuromorphic control system for a lightweight humanoid robot

    NASA Astrophysics Data System (ADS)

    Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo

    2017-03-01

A neuromorphic control system for a lightweight middle-size humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neurons' outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization for the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, showed that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and variance 0.74.
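The architecture above (a fixed, randomly connected recurrent core with trained linear read-out units) is the reservoir-computing pattern. A minimal sketch, with all sizes, scalings, and signals chosen for illustration rather than taken from the paper:

```python
import numpy as np

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression fit of linear read-out weights over fixed
    reservoir states -- the only trained part of such a network."""
    S, Y = np.asarray(states), np.asarray(targets)
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ Y)

# Toy reservoir: fixed random recurrent net driven by a sine input,
# trained to reproduce a phase-shifted periodic target.
rng = np.random.default_rng(1)
N, T = 50, 400
W_rec = rng.standard_normal((N, N)) * (0.9 / np.sqrt(N))  # ~0.9 spectral radius
W_in = rng.standard_normal((N, 1))
u = np.sin(2 * np.pi * np.arange(T) / 50)[:, None]
x = np.zeros(N)
states = np.empty((T, N))
for k in range(T):
    x = np.tanh(W_rec @ x + W_in @ u[k])   # fixed, untrained dynamics
    states[k] = x
target = np.cos(2 * np.pi * np.arange(T) / 50)[:, None]
W_out = train_readout(states[100:], target[100:])  # skip initial transient
pred = states @ W_out
rmse = np.sqrt(np.mean((pred[100:] - target[100:]) ** 2))
```

Only `W_out` is learned; the recurrent core stays fixed, which is what makes training a set of read-out units per trajectory module cheap.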

  12. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots

    PubMed Central

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people’s social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people’s judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) high level of agreement in judgments of robot behavior – (2b) slightly lower but still largely similar to agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people’s intentional stance toward the robot was in this case very similar to their stance toward the human. PMID:29184519

  13. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  14. Upper Torso Control for HOAP-2 Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Sandoval, Steven P.

    2005-01-01

Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, it also opens many research doors on how humans function. The overall objective is replacing humans operating in unsafe environments. A first target application is assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal for the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task will require the robot to identify several items, pick them up, transport them to the build location, then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will use the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex models of the system must be constructed in advance; only input/output relationships are required to model the system.

  15. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths: autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human-robot interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons: the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high-DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.

  16. Multi-layer robot skin with embedded sensors and muscles

    NASA Astrophysics Data System (ADS)

    Tomar, Ankit; Tadesse, Yonas

    2016-04-01

Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facially expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interactions. A novel composite artificial skin consisting of sensors and twisted polymer actuators is proposed. The artificial skin is conformable to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin so that fluids can be injected to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. The humanoid robot can be used by computer scientists and other behavioral science personnel to test various algorithms, and to understand and develop more capable humanoid robots with facial expression capability. The small-scale humanoid robots can also assist ongoing therapeutic treatment research with autistic children. The multilayer skin can be used for many soft robots, enabling them to detect both temperature and pressure while actuating the entire structure.

  17. Robonaut Mobile Autonomy: Initial Experiments

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.

    2006-01-01

    A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.

  18. Humanoid robot Lola: design and walking control.

    PubMed

    Buschmann, Thomas; Lohmeier, Sebastian; Ulbrich, Heinz

    2009-01-01

    In this paper we present the humanoid robot LOLA, its mechatronic hardware design, simulation and real-time walking control. The goal of the LOLA-project is to build a machine capable of stable, autonomous, fast and human-like walking. LOLA is characterized by a redundant kinematic configuration with 7-DoF legs, an extremely lightweight design, joint actuators with brushless motors and an electronics architecture using decentralized joint control. Special emphasis was put on an improved mass distribution of the legs to achieve good dynamic performance. Trajectory generation and control aim at faster, more flexible and robust walking. Center of mass trajectories are calculated in real-time from footstep locations using quadratic programming and spline collocation methods. Stabilizing control uses hybrid position/force control in task space with an inner joint position control loop. Inertial stabilization is achieved by modifying the contact force trajectories.
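The center-of-mass trajectory generation mentioned above, quadratic programming over footstep-derived waypoints, can be sketched as a tiny least-squares QP. This is illustrative only: Lola's controller combines QP with spline collocation and contact constraints, none of which are reproduced here.

```python
import numpy as np

def smooth_com(waypoints, times, T, w=1e3):
    """Least-squares form of a small QP: minimize squared discrete
    acceleration of a 1-D CoM path while softly hitting waypoint
    positions (e.g. derived from footstep locations) at given time
    indices. Weight w and soft constraints are illustrative choices."""
    A = np.zeros((T - 2, T))                   # second-difference operator
    for i in range(T - 2):
        A[i, i:i + 3] = [1.0, -2.0, 1.0]
    C = np.zeros((len(times), T))              # soft waypoint constraints
    C[np.arange(len(times)), times] = w
    M = np.vstack([A, C])
    b = np.concatenate([np.zeros(T - 2), w * np.asarray(waypoints, float)])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x

# CoM forward position passing through three waypoints over 100 samples.
com = smooth_com([0.0, 0.2, 0.4], [0, 50, 99], T=100)
```

Minimizing squared acceleration yields the smooth, spline-like profiles that make such trajectories feasible to track in real time.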

  19. Project M: An Assessment of Mission Assumptions

    NASA Technical Reports Server (NTRS)

    Edwards, Alycia

    2010-01-01

Project M is a mission Johnson Space Center is working on to send an autonomous humanoid robot (Robonaut 2) to the Moon in 1,000 days. The robot will travel in a lander, fueled by liquid oxygen and liquid methane, and land on the Moon, avoiding any hazardous obstacles. It will perform tasks like maintenance, construction, and simple student experiments. This mission is also being used as inspiration for new advancements in technology. I am considering three of the design assumptions that contribute to determining the mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

  20. DARPA Robotics Challenge (DRC) Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot

    DTIC Science & Technology

    2017-02-01

DARPA Robotics Challenge (DRC): using human-machine teamwork to perform disaster response with a humanoid robot. Work performed by the Florida Institute for Human and Machine Cognition (IHMC) from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge.

  1. Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay

    NASA Technical Reports Server (NTRS)

    SunSpiral, Vytas; Wheeler, Kevin R.; Allan, Mark B.; Martin, Rodney

    2006-01-01

    Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual path research agenda is reviewed which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior to be the source of autonomous control. This approach changes the role of teleoperation from being a stand-in for autonomy into the first data collection step for developing generative models capable of autonomous control of the robot.
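The underlying task, classifying six-dimensional (position plus orientation) motion trajectories, can be sketched with a nearest-template classifier. This stands in for the paper's deterministic and Hidden Markov Model approaches; the resampling scheme, templates, and names are assumptions for illustration:

```python
import numpy as np

def resample(path, n):
    """Linearly resample a (length, d) path to n samples."""
    path = np.asarray(path, float)
    idx = np.linspace(0, len(path) - 1, n)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(path) - 1)
    w = (idx - lo)[:, None]
    return path[lo] * (1 - w) + path[hi] * w

def classify_trajectory(traj, templates, n=50):
    """Nearest-template classification by mean pointwise distance
    after resampling all trajectories to a common length."""
    q = resample(traj, n)
    dists = {name: np.mean(np.linalg.norm(q - resample(t, n), axis=1))
             for name, t in templates.items()}
    return min(dists, key=dists.get)

# Toy 6-D templates: a straight "reach" and an oscillating "wave".
s = np.linspace(0.0, 1.0, 80)
templates = {
    "reach": np.hstack([s[:, None], np.zeros((80, 5))]),
    "wave":  np.hstack([s[:, None], np.sin(4 * np.pi * s)[:, None],
                        np.zeros((80, 4))]),
}
rng = np.random.default_rng(3)
query = templates["reach"][::2] + 0.02 * rng.standard_normal((40, 6))
label = classify_trajectory(query, templates)
```

An HMM-based classifier, as in the paper, additionally models the temporal statistics of each class rather than comparing to a single template, but the interface (trajectory in, class label out) is the same.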

  2. Vocal emotion of humanoid robots: a study from brain mechanism.

    PubMed

    Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong

    2014-01-01

Driven by rapid, ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, which is needed to facilitate communication between machines and human beings, especially the vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe brain responses and analyze the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings.

  3. Robotic Literacy Learning Companions: Exploring Student Engagement with a Humanoid Robot in an Afterschool Literacy Program

    ERIC Educational Resources Information Center

    Levchak, Sofia

    2016-01-01

    This study was an investigation of the use of a NAO humanoid robot as an effective tool for engaging readers in an afterschool program as well as to find if increasing engagement using a humanoid robot would affect students' reading comprehension when compared to traditional forms of instruction. The targeted population of this study was…

  4. Vocal Emotion of Humanoid Robots: A Study from Brain Mechanism

    PubMed Central

    Wang, Youhui; Hu, Xiaohua; Zhou, Jie; Kuo, Taitzong

    2014-01-01

Driven by rapid, ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, which is needed to facilitate communication between machines and human beings, especially the vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an fMRI experiment to observe brain responses and analyze the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings. PMID:24587712

  5. Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.

    PubMed

    Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan

    2015-12-01

Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well tolerated in the sample, children with ASD exhibited greater attention to the robotic system than to the human administrator, and for children with ASD imitation performance appeared superior during the robotic interaction.

  6. Robots testing robots: ALAN-Arm, a humanoid arm for the testing of robotic rehabilitation systems.

    PubMed

    Brookes, Jack; Kuznecovs, Maksims; Kanakis, Menelaos; Grigals, Arturs; Narvidas, Mazvydas; Gallagher, Justin; Levesley, Martin

    2017-07-01

Robotics is increasing in popularity as a method of providing rich, personalized and cost-effective physiotherapy to individuals with some degree of upper limb paralysis, such as those who have suffered a stroke. These robotic rehabilitation systems are often high powered, and exoskeletal systems can attach to the person in a restrictive manner. Therefore, ensuring the mechanical safety of these devices before they come in contact with individuals is a priority. Additionally, rehabilitation systems may use novel sensor systems to measure current arm position. Used to capture and assess patient movements, these first need to be verified for accuracy by an external system. We present the ALAN-Arm, a humanoid robotic arm designed to be used for both accuracy benchmarking and safety testing of robotic rehabilitation systems. The system can be attached to a rehabilitation device and then replay generated or human movement trajectories, as well as autonomously play rehabilitation games or activities. Tests of the ALAN-Arm indicated it could recreate the path of a generated slow movement path with a maximum error of 14.2 mm (mean = 5.8 mm) and perform cyclic movements up to 0.6 Hz with low gain (<1.5 dB). Replaying human data trajectories showed the ability to largely preserve human movement characteristics with slightly higher path length and lower normalised jerk.
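The maximum/mean path-error figures reported above come down to comparing the executed path against the desired one point by point. A minimal sketch (the function name is an assumption; paths are assumed sampled at the same instants):

```python
import numpy as np

def path_errors(actual, desired):
    """Pointwise position error between an executed and a desired path
    sampled at the same instants; returns (max, mean) error. The
    ALAN-Arm results report such values in millimetres."""
    e = np.linalg.norm(np.asarray(actual, float) - np.asarray(desired, float),
                       axis=1)
    return e.max(), e.mean()

# Two 2-D samples: one point off by a 3-4-5 triangle, one exact.
max_err, mean_err = path_errors([[3.0, 4.0], [0.0, 0.0]],
                                [[0.0, 0.0], [0.0, 0.0]])
```

Reporting both the maximum and the mean, as the paper does, separates worst-case safety margin from typical tracking quality.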

  7. Design and motion control of bioinspired humanoid robot head from servo motors toward artificial muscles

    NASA Astrophysics Data System (ADS)

    Almubarak, Yara; Tadesse, Yonas

    2017-04-01

    The potential applications of humanoid robots in social environments motivate researchers to design and control biomimetic humanoid robots. Generally, people are more interested in interacting with robots that have attributes and movements similar to those of humans. The head is one of the most important parts of any social robot. Currently, most humanoid heads use electrical motors, pneumatic actuators, or shape memory alloy (SMA) actuators for actuation. Electrical and pneumatic actuators take up considerable space and can cause unsmooth motions, while SMAs are expensive to use in humanoids. Recently, Twisted and Coiled Polymer (TCP) artificial muscles have been used as linear actuators in many robotic projects; they take up little space compared to motors. In this paper, we demonstrate the design process and motion control of a robotic head with TCP muscles. Servo motors and artificial muscles are used to actuate the head motion, controlled by a cost-efficient ARM Cortex-M7 based development board. A complete comparison between the two actuators is presented.

  8. The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy

    NASA Astrophysics Data System (ADS)

    Burns, Alec; Tadesse, Yonas

    2014-03-01

    In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet humanoid development needs a seamless integration of artificial muscles, sensors, controllers, and structures. We have designed a human-like robot with 15 DOFs, 580 mm in height, and a 925 mm arm span, using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arms and hands for safe human-robot interaction, and a two-wheel mobile platform for maneuverability, are incorporated in the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented. A comprehensive study of the upper body design, mobile base, actuator selection, electronics, and performance evaluation is included in this paper.

  9. A Course in Simulation and Demonstration of Humanoid Robot Motion

    ERIC Educational Resources Information Center

    Liu, Hsin-Yu; Wang, Wen-June; Wang, Rong-Jyue

    2011-01-01

    An introductory course for humanoid robot motion realization for undergraduate and graduate students is presented in this study. The basic operations of AX-12 motors and the mechanics combination of a 16 degrees-of-freedom (DOF) humanoid robot are presented first. The main concepts of multilink systems, zero moment point (ZMP), and feedback…

  10. Teen Sized Humanoid Robot: Archie

    NASA Astrophysics Data System (ADS)

    Baltes, Jacky; Byagowi, Ahmad; Anderson, John; Kopacek, Peter

    This paper describes our first teen sized humanoid robot, Archie. The robot has been developed in conjunction with Prof. Kopacek's lab at the Technical University of Vienna. Archie uses brushless motors and harmonic gears with a novel approach to position encoding. Based on our previous experience with small humanoid robots, we developed software to create, store, and play back motions, as well as control methods that automatically balance the robot using feedback from an inertial measurement unit (IMU).

  11. Humanoids in Support of Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier

    2006-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges, for the solving of which new humanoid technologies are needed. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using a small-scale Fujitsu HOAP-2 humanoid is outlined.

  12. Sports Training Support Method by Self-Coaching with Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Toyama, S.; Ikeda, F.; Yasaka, T.

    2016-09-01

    This paper proposes a new training support method, called self-coaching, using humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their availability. One robot, called the target robot, reproduces the motion of a target player; the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her own inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain deeper insight into the technique. Experimental results show the potential of the new training method, and issues with the self-coaching interface program are identified as future work.

  13. Tactile Gloves for Autonomous Grasping With the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Martin, T. B.; Ambrose, R. O.; Diftler, M. A.; Platt, R., Jr.; Butzer, M. J.

    2004-01-01

    Tactile data from rugged gloves are providing the foundation for developing autonomous grasping skills for the NASA/DARPA Robonaut, a dexterous humanoid robot. These custom gloves complement the human-like dexterity available in the Robonaut hands. Multiple versions of the gloves are discussed, showing a progression in using advanced materials and construction techniques to enhance sensitivity and overall sensor coverage. The force data provided by the gloves can be used to improve dexterous, tool, and power grasping primitives. Experiments with the latest gloves focus on the use of tools, specifically a power drill used to approximate an astronaut's torque tool.

  14. Improving Grasp Skills Using Schema Structured Learning

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.

    2006-01-01

    In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy the action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.

  15. Humanoids for lunar and planetary surface operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing

    2005-01-01

    This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for the construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for the operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges, for the solving of which new humanoid technologies are needed. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.

  16. Complete low-cost implementation of a teleoperated control system for a humanoid robot.

    PubMed

    Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena

    2013-01-24

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for the further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements on a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
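
    The abstract states that the back-mounted accelerometer signal is Kalman-filtered before feeding the fuzzy balance controller. As a minimal sketch of that filtering step (not the paper's code; the noise variances q and r are assumed values), a scalar Kalman filter for a roughly constant tilt reading looks like:

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for a noisy accelerometer tilt reading.

    Model: the tilt is approximately constant between samples (a random
    walk), with process-noise variance q and measurement-noise variance r.
    """

    def __init__(self, q=1e-4, r=0.05):
        self.q, self.r = q, r
        self.x = 0.0   # state estimate (filtered tilt)
        self.p = 1.0   # estimate variance (large: initial state unknown)

    def update(self, z):
        self.p += self.q                   # predict: uncertainty grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= 1.0 - k                  # shrink uncertainty
        return self.x
```

    The filtered tilt, rather than the raw reading, would then drive the balance controller; the same predict-correct recursion generalizes to the multi-axis case.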

  17. Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot

    PubMed Central

    Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena

    2013-01-01

    Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for the further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements on a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029

  18. Acquiring neural signals for developing a perception and cognition model

    NASA Astrophysics Data System (ADS)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to the automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides a theoretical basis for using a bottom-up approach to automating the management of large amounts of information in ways directly useful for human operators. However, integrating human intelligence into a game-theoretic framework for dynamic and adaptive decision support requires a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system that acquires brainwaves while human subjects imagine a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for walking; 6 DOFs on the shoulders and arms for arm motion; and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.

  19. Humanoid robotics in health care: An exploration of children's and parents' emotional reactions.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2015-07-01

    A new non-pharmacological method of distraction was tested with 57 children during their annual flu vaccination. Given children's growing enthusiasm for technological devices, a humanoid robot was programmed to interact with them while a nurse administered the vaccination. Children smiled more often with the robot, as compared to the control condition, but they did not cry less. Parents indicated that their children held stronger memories for the robot than for the needle, wanted the robot in the future, and felt empowered to cope. We conclude that children and their parents respond positively to a humanoid robot at the bedside. © The Author(s) 2013.

  20. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    DTIC Science & Technology

    2006-01-01

    Brooks, R., Bryson, J., Marjanovic, M., Stein, L. A., & Wessler, M. (1996), 'Humanoid Software', Technical report, MIT Artificial Intelligence Lab… Journal of Biomechanics 19, 231-238. Marjanovic, M. (1995), Learning Functional Maps Between Sensorimotor Systems on a Humanoid Robot, Master's thesis, MIT

  1. Robonaut: a robot designed to work with humans in space

    NASA Technical Reports Server (NTRS)

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space-walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members outside of spacecraft, and serving in a rapid response capacity.

  2. Robonaut: a robot designed to work with humans in space.

    PubMed

    Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby

    2003-01-01

    The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space-walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members outside of spacecraft, and serving in a rapid response capacity.

  3. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control, in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. The task was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for performing any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  4. The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study

    ERIC Educational Resources Information Center

    Bacivarov, Ioan C.; Ilian, Virgil L. M.

    2012-01-01

    This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Even if a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…

  5. Generate an Optimum Lightweight Legs Structure Design Based on Critical Posture in A-FLoW Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Luthfi, A.; Subhan, K. A.; Eko H, B.; Sanggar, D. R.; Pramadihanto, D.

    2018-04-01

    Lightweight construction and energy efficiency play an important role in humanoid robot development. The application of computer-aided engineering (CAE) in the development process is one way to achieve the appropriate weight reduction. This paper describes a method to generate an optimum lightweight leg structure design based on the critical posture during walking locomotion in the A-FLoW humanoid robot. The critical posture can be obtained from the highest forces and moments in each joint of the robot body during walking locomotion. From the finite element analysis (FEA) results, a leg structure design for the A-FLoW humanoid robot was realized with a maximum displacement of 0.05 mm and a weight reduction of about 0.598 kg for the thigh structure, and a maximum displacement of 0.13 mm and a weight reduction of about 0.57 kg for the shin structure.

  6. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-11-24

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, holds promise for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
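
    The procedure above classifies SSVEP responses to flickering stimuli. As a toy illustration of the core idea only (the paper trains per-subject classifiers, and real pipelines often use CCA or PSD features with harmonics; the function below is an assumption, not the authors' code), the attended flicker frequency can be picked by comparing narrowband DFT power at each candidate frequency:

```python
import math

def ssvep_detect(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency with the most EEG power.

    Computes the single-frequency DFT power of one epoch at each
    candidate flicker frequency and returns the dominant one.
    """
    def power(f):
        re = sum(x * math.cos(2 * math.pi * f * i / fs)
                 for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * f * i / fs)
                 for i, x in enumerate(signal))
        return re * re + im * im

    return max(candidate_freqs, key=power)
```

    The winning frequency is then mapped to a robot command (e.g. walk forward, turn), closing the control loop.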

  7. SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots

    PubMed Central

    Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan

    2015-01-01

    Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, holds promise for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled. PMID:26650051

  8. Balancing Theory and Practical Work in a Humanoid Robotics Course

    ERIC Educational Resources Information Center

    Wolff, Krister; Wahde, Mattias

    2010-01-01

    In this paper, we summarize our experiences from teaching a course in humanoid robotics at Chalmers University of Technology in Goteborg, Sweden. We describe the robotic platform used in the course and we propose the use of a custom-built robot consisting of standard electronic and mechanical components. In our experience, by using standard…

  9. A neural framework for organization and flexible utilization of episodic memory in cumulatively learning baby humanoids.

    PubMed

    Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2014-12-01

    Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of organizing episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration).
Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.

  10. Social cognitive neuroscience and humanoid robotics.

    PubMed

    Chaminade, Thierry; Cheng, Gordon

    2009-01-01

    We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework to understand social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of this framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

  11. Foot Placement Modification for a Biped Humanoid Robot with Narrow Feet

    PubMed Central

    Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also changed laterally to keep the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and to the roll and pitch axes of the foot. The proposed method is verified through experiments with the biped humanoid robot WABIAN-2R, which realized knee-bent walking with feet 30 mm in breadth. Moreover, WABIAN-2R, equipped with a human-like foot mechanism mimicking the arch structure of the human foot, realized stable walking with knee-stretched, heel-contact, and toe-off motion. PMID:24592154
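
    The two corrections described in this abstract (level the swing-foot sole, shift the landing point outward) can be caricatured as a simple proportional rule. This is an illustrative sketch only; the function and its gains are assumptions, not WABIAN-2R's actual control law:

```python
def modify_foot_placement(nominal_y, body_roll, body_roll_rate,
                          k_angle=0.08, k_rate=0.02):
    """Toy foot-placement modification from the body attitude angle.

    (1) Rotate the swing foot by the negative of the body roll so the
        sole lands horizontally despite the lean.
    (2) Shift the lateral landing point in proportion to the roll angle
        and roll rate, catching a fall to the outside.
    Angles in radians, distances in meters; gains are illustrative.
    """
    foot_roll = -body_roll
    landing_y = nominal_y + k_angle * body_roll + k_rate * body_roll_rate
    return landing_y, foot_roll
```

    With zero attitude error the nominal gait is untouched; as the robot leans, the swing foot both levels itself and steps further out to arrest the fall.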

  12. Foot placement modification for a biped humanoid robot with narrow feet.

    PubMed

    Hashimoto, Kenji; Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies the foot placement according to the robot's attitude angle. If the robot tends to fall, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also changed laterally to keep the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and to the roll and pitch axes of the foot. The proposed method is verified through experiments with the biped humanoid robot WABIAN-2R, which realized knee-bent walking with feet 30 mm in breadth. Moreover, WABIAN-2R, equipped with a human-like foot mechanism mimicking the arch structure of the human foot, realized stable walking with knee-stretched, heel-contact, and toe-off motion.

  13. The second me: Seeing the real body during humanoid robot embodiment produces an illusion of bi-location.

    PubMed

    Aymerich-Franch, Laura; Petit, Damien; Ganesh, Gowrishankar; Kheddar, Abderrahmane

    2016-11-01

    Whole-body embodiment studies have shown that synchronized multi-sensory cues can trick a healthy human mind to perceive self-location outside the bodily borders, producing an illusion that resembles an out-of-body experience (OBE). But can a healthy mind also perceive the sense of self in more than one body at the same time? To answer this question, we created a novel artificial reduplication of one's body using a humanoid robot embodiment system. We first enabled individuals to embody the humanoid robot by providing them with audio-visual feedback and control of the robot head movements and walk, and then explored the self-location and self-identification perceived by them when they observed themselves through the embodied robot. Our results reveal that, when individuals are exposed to the humanoid body reduplication, they experience an illusion that strongly resembles heautoscopy, suggesting that a healthy human mind is able to bi-locate in two different bodies simultaneously. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Developing Humanoid Robots for Real-World Environments

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Kuhlman, Michael; Assad, Chris; Keymeulen, Didier

    2008-01-01

    Humanoids are steadily improving in appearance and in functionality demonstrated in controlled environments. To address the challenges of operation in the real world, researchers have proposed the use of brain-inspired architectures for robot control and of robot learning techniques that enable the robot to acquire and tune skills and behaviours. In the first part of the paper we introduce new concepts and results in these two areas. First, we present a cerebellum-inspired model that has demonstrated efficiency in the sensory-motor control of anthropomorphic arms and in gait control of dynamic walkers. Then, we present a set of new ideas related to robot learning, emphasizing the importance of developing teaching techniques that support learning. In the second part of the paper we propose the use in robotics of iterative and incremental development methodologies, in the context of practical task-oriented applications. These methodologies promise to reach system-level integration rapidly and to identify early the system-level weaknesses to focus on. We apply this methodology to a task targeting the automated assembly of a modular structure using HOAP-2. We confirm that this approach led to rapid development of an end-to-end capability and offered guidance on which technologies to focus on for gradual improvement of a complete functional system. It is believed that providing Grand Challenge type milestones in practical task-oriented applications accelerates development. As a meaningful short-to-mid-term target we propose the 'IKEA Challenge', aimed at demonstrating the autonomous assembly of various pieces of furniture, from the box, following the included written/drawn instructions.

  15. An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.

    PubMed

    Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S

    2011-01-01

    Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by impaired social functioning: impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships, and a lack of social and emotional exchanges. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of robotics as an improved diagnostic and early intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot can track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and its eyelids (1 DOF each), raise its wings (1 DOF each), and play and record sound. These attributes will allow it to be used for the diagnosis and treatment of autism. As part of this project, the robot and its electronics and control software have been developed; integrating semi-autonomous interaction, enabling teleoperation by a remote healthcare provider, and initiating trials with children in a local clinic are in progress.

  16. Humanoid assessing rehabilitative exercises.

    PubMed

    Simonov, M; Delconte, G

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "New Methodologies for Patients Rehabilitation". The article presents an approach in which a rehabilitative exercise prepared by a healthcare professional is encoded as formal knowledge and used by a humanoid robot to assist patients without involving other care actors. The main objective is the use of humanoids in rehabilitative care; an example is pulmonary rehabilitation in COPD patients. Another goal is an automated judgment functionality to determine how well the performed exercise matches the pre-programmed correct sequence. We use Aldebaran Robotics' NAO humanoid to set up an artificial cognitive application. The pre-programmed NAO prompts an elderly patient to undertake a humanoid-driven rehabilitation exercise, but it needs to evaluate the human's actions against the correct template. The patient is observed through NAO's cameras, and we use the Microsoft Kinect SDK to extract the motion path from the humanoid's recorded video. We compare human- and humanoid-operated process sequences using Dynamic Time Warping (DTW) and test the prototype. This artificial cognitive software showcases the use of the DTW algorithm to enable humanoids to judge in near real time whether rehabilitative exercises performed by patients following the robot's indications are correct. Combining intelligent applications piloting humanoids with DTW pattern matching applied at run time to compare humanoid- and human-operated process sequences could enable more sustainable rehabilitative care services in remote residential settings and, in turn, lower the need for human care.
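
    The DTW comparison at the heart of the judgment step can be sketched in pure Python. The joint-angle sequences below and the idea of thresholding the alignment cost are illustrative assumptions; the actual features in the paper come from Kinect skeletal tracking:

```python
# Minimal dynamic time warping (DTW) sketch: align a template exercise
# sequence against a patient-performed one and return the alignment cost.
# Sequences and values are invented for illustration.

def dtw_distance(ref, obs):
    """Return the DTW alignment cost between two 1-D sequences."""
    n, m = len(ref), len(obs)
    INF = float("inf")
    # cost[i][j] = best alignment cost of ref[:i] against obs[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - obs[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a ref sample
                                 cost[i][j - 1],      # skip an obs sample
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

reference = [0.0, 0.5, 1.0, 0.5, 0.0]        # template joint angle over time
performed = [0.0, 0.4, 0.6, 1.0, 0.6, 0.1]   # patient: slower, slightly off
print(round(dtw_distance(reference, performed), 2))  # → 0.4
```

    A low cost means the patient's motion matches the template despite timing differences; a fixed threshold on this cost is one simple way to turn it into a correct/incorrect judgment.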

  17. Can We Talk to Robots? Ten-Month-Old Infants Expected Interactive Humanoid Robots to Be Talked to by Persons

    ERIC Educational Resources Information Center

    Arita, A.; Hiraki, K.; Kanda, T.; Ishiguro, H.

    2005-01-01

    As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the…

  18. Influence of facial feedback during a cooperative human-robot task in schizophrenia.

    PubMed

    Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane

    2017-11-03

    Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was absent in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulty exploiting facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.

  19. Predictive Interfaces for Long-Distance Tele-Operations

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas

    2005-01-01

    We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. First, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Second, we develop compensation for the time delay that exists when sending telemetry data from a remote operation point to robots located in low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to their large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to transition incrementally from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time delay is introduced into the system, tele-operation slows down to a bump-and-wait type of activity. We would like to maintain the same operator interface despite time delays. To this end, we are developing an interface that allows us to predict the intentions of the operator while they interact with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
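
    As a rough illustration of scoring operator intent with generative sequence models, the sketch below evaluates a discretized command stream against two hand-built discrete HMMs using the forward algorithm and picks the more likely intent. The states, observation symbols, and all probabilities are invented for illustration; the actual Robonaut models were trained on real telemetry:

```python
# Toy intent recognition with discrete HMMs, pure Python.
# Each candidate intent has its own (start, transition, emission) model;
# the forward algorithm gives P(observations | model). All numbers here
# are hypothetical.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the forward algorithm."""
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
                 for s in range(n_states)]
    return sum(alpha)

# Observation symbols: 0 = arm moving, 1 = hand closing
models = {
    "reach": ([0.9, 0.1], [[0.9, 0.1], [0.1, 0.9]], [[0.95, 0.05], [0.8, 0.2]]),
    "grasp": ([0.5, 0.5], [[0.6, 0.4], [0.1, 0.9]], [[0.7, 0.3], [0.1, 0.9]]),
}
sequence = [0, 0, 1, 1, 1]  # arm motion followed by sustained hand closure
best = max(models, key=lambda k: forward_likelihood(sequence, *models[k]))
print(best)  # → grasp
```

    In a real system the winning model's remaining state sequence could then seed the corresponding sub-goal autonomy behavior.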

  20. Artificial heart for humanoid robot

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Wu, Lianjun; Tadesse, Yonas

    2014-03-01

    A soft robotic device inspired by the pumping action of a biological heart is presented in this study. Developing an artificial heart for a humanoid robot brings us closer to a better biomedical device for ultimate use in humans. As technology advances, the implementation of high-performance, biomimetic artificial organs draws nearer each day. In this paper, we present the design and development of a soft artificial heart that can be used in a humanoid robot and that simulates the functions of a human heart using shape memory alloy technology. Using elastomeric substrates and features for the transport of fluids, the robotic heart is designed to pump a blood-like fluid to parts of the robot such as the face, for example to simulate blushing or anger.

  1. The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink

    NASA Astrophysics Data System (ADS)

    Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli

    A simulation method for an adaptive controller is proposed for a humanoid robot system based on the co-simulation of Solidworks, ADAMS and Simulink. This method avoids a complex mathematical modeling process while fully exploiting the real-time dynamic simulation capability of Simulink, and it can be generalized to other complicated control systems. The method is adopted to build and analyze the model of a humanoid robot, and trajectory tracking and adaptive controller design then proceed on this basis. The trajectory-tracking performance is evaluated by least-squares curve fitting. Comparative analysis shows that the anti-interference capability of the robot is improved considerably.
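
    The least-squares evaluation step can be illustrated with a minimal line fit to tracking-error samples. The data and the near-zero-slope criterion below are invented for illustration, not taken from the paper:

```python
# Least-squares line fit in pure Python, sketching the kind of
# curve-fitting evaluation the abstract mentions.

def linear_least_squares(xs, ys):
    """Fit y = a*x + b minimizing squared error; return (a, b)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical tracking-error samples over time: a near-zero fitted slope
# suggests the adaptive controller keeps the error from growing.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
err = [0.10, 0.08, 0.11, 0.09, 0.10]
slope, intercept = linear_least_squares(t, err)
print(abs(slope) < 0.01)  # → True
```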

  2. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain-computer interface (BCI) technology allows people to interact actively with the world through surrogates. Controlling real humanoid robots through a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain with respect to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by providing multisensory feedback to a BCI user. PMID:24987350

  3. Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism

    ERIC Educational Resources Information Center

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot…

  4. Robotic Technology Efforts at the NASA/Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Diftler, Ron

    2017-01-01

    The NASA/Johnson Space Center has been developing robotic systems in support of space exploration for more than two decades. The goal of the Center's Robotic Systems Technology Branch is to design and build hardware and software to assist astronauts in performing their mission. These systems include rovers, humanoid robots, inspection devices and wearable robotics. Inspection systems provide external views of space vehicles to search for surface damage and also maneuver inside restricted areas to verify proper connections. New concepts in human and robotic rovers offer solutions for navigating the difficult terrain expected in future planetary missions. An important objective for humanoid robots is to relieve the crew of "dull, dirty or dangerous" tasks, allowing them more time to perform their important science and exploration missions. Wearable robotics, one of the Center's newest development areas, can provide crew with low-mass exercise capability and also augment an astronaut's strength while wearing a space suit. This presentation will describe the robotic technology and prototypes developed at the Johnson Space Center that are the basis for future flight systems. An overview of inspection robots will show their operation on the ground and in orbit. Rovers with independent wheel modules, crab steering, and active suspension are able to climb over large obstacles and nimbly maneuver around others. Humanoid robots, including the First Humanoid Robot in Space, Robonaut 2, demonstrate capabilities that will lead to robotic caretakers for human habitats in space and on Mars. The Center's Wearable Robotics Lab supports work in assistive and sensing devices, including exoskeletons, force-measuring shoes, and grasp-assist gloves.

  6. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further help make these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and recognizes whether an encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to let the surrogate robot recognize the subject's favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work carries an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  7. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further help make these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and recognizes whether an encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to let the surrogate robot recognize the subject's favorites. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work carries an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.

  8. Pre-Schoolers' Interest and Caring Behaviour around a Humanoid Robot

    ERIC Educational Resources Information Center

    Ioannou, Andri; Andreou, Emily; Christofi, Maria

    2015-01-01

    This exploratory case study involved a humanoid robot, NAO, and four pre-schoolers. NAO was placed in an indoor playground together with other toys and appeared as a peer who played, talked, danced and told stories. Analysis of video recordings focused on children's behaviour around NAO and how the robot gained children's attention and…

  9. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task

    PubMed Central

    Rogers, Wendy A.

    2015-01-01

    Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936

  10. Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task.

    PubMed

    Prakash, Akanksha; Rogers, Wendy A

    2015-04-01

    Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.

  11. Towards Autonomous Operation of Robonaut 2

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hart, Stephen W.; Yamokoski, J. D.

    2011-01-01

    The Robonaut 2 (R2) platform, as shown in Figure 1, was designed through a collaboration between NASA and General Motors to be a capable robotic assistant with dexterity similar to that of a suited astronaut [1]. An R2 robot was sent to the International Space Station (ISS) in February 2011 and, in doing so, became the first humanoid robot in space. Its capabilities are presently being tested and expanded to increase its usefulness to the crew. Current work on R2 includes the addition of a mobility platform to allow the robot to complete tasks (such as cleaning, maintenance, or simple construction activities) both inside and outside the ISS. To support these new activities, R2's software architecture is being developed to provide efficient ways of programming robust and autonomous behavior. In particular, a multi-tiered software architecture is proposed that combines principles of low-level feedback control with higher-level planners that accomplish behavioral goals at the task level given the run-time context, user constraints, the health of the system, and so on. The proposed architecture is shown in Figure 2. At the lowest level, the resource level, reside the various sensory and motor signals available to the system. The sensory signals for a robot such as R2 include multiple channels of force/torque data, joint or Cartesian positions calculated through the robot's proprioception, and signals derived from objects observable by its cameras.

  12. LARM PKM solutions for torso design in humanoid robots

    NASA Astrophysics Data System (ADS)

    Ceccarelli, Marco

    2014-12-01

    Human-like torso features are essential in humanoid robots. In this paper, problems in the design and operation of a robotic torso are discussed with reference to experiences and designs developed at the Laboratory of Robotics and Mechatronics (LARM) in Cassino, Italy. A new solution is presented, with conceptual views, as a waist-trunk structure that properly partitions the performance required for walking and for arm operations sustained by a torso.

  13. Robot-Mediated Interviews - How Effective Is a Humanoid Robot as a Tool for Interviewing Young Children?

    PubMed Central

    Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre

    2013-01-01

    Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ from their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and with the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an ‘interviewer’ for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625

  14. "Robovie, You'll Have to Go into the Closet Now": Children's Social and Moral Relationships with a Humanoid Robot

    ERIC Educational Resources Information Center

    Kahn, Peter H., Jr.; Kanda, Takayuki; Ishiguro, Hiroshi; Freier, Nathan G.; Severson, Rachel L.; Gill, Brian T.; Ruckert, Jolina H.; Shen, Solace

    2012-01-01

    Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter…

  15. Natural Tasking of Robots Based on Human Interaction Cues

    DTIC Science & Technology

    2005-06-01

    MIT. • Matthew Marjanovic, researcher, ITA Software. • Brian Scassellati, Assistant Professor of Computer Science, Yale. • Matthew Williamson… 2004. [74] Charlie C. Kemp. Shoes as a platform for vision. 7th IEEE International Symposium on Wearable Computers, 2004. [75] Matthew Marjanovic. …meso: Simulated muscles for a humanoid robot. Presentation for Humanoid Robotics Group, MIT AI Lab, August 2001. [76] Matthew J. Marjanovic. Teaching…

  16. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of a humanoid robotic platform, and specifically of the open-source iCub platform, for the study of embodied cognition.

  17. A survey on dielectric elastomer actuators for soft robots.

    PubMed

    Gu, Guo-Ying; Zhu, Jian; Zhu, Li-Min; Zhu, Xiangyang

    2017-01-23

    Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: (i) dielectric elastomers are a kind of soft, motion-generating material that resembles natural human muscle in terms of force, strain (displacement per unit length or area), and actuation pressure/density; and (ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing the key points of the working principle, key components, and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots, and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further studies in terms of mechanism design, dynamics modeling, and autonomous control.
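
    The voltage-induced actuation pressure the survey refers to is commonly estimated with the Maxwell stress expression p = ε0·εr·(V/d)². The sketch below uses that standard formula with illustrative numbers that are not taken from the survey itself.

    ```python
    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def maxwell_pressure(voltage, thickness, eps_r):
        """Equivalent electrostatic (Maxwell) pressure squeezing a dielectric
        elastomer film between compliant electrodes: p = eps0 * eps_r * (V/d)^2."""
        return EPS0 * eps_r * (voltage / thickness) ** 2

    # Illustrative (not from the survey): 3 kV across a 100-um film, eps_r ~ 3
    p = maxwell_pressure(3e3, 100e-6, 3.0)  # ~24 kPa
    ```

    Because the pressure grows with the square of the electric field, thinner films reach useful actuation pressures at lower voltages.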

  18. From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot

    DTIC Science & Technology

    2003-06-01

    pages 961-968. Brooks, R. A., Breazeal, C., Marjanovic, M., and Scassellati, B. (1999). The Cog project: Building a humanoid robot. Lecture Notes in...investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38. Marjanovic, M. (1995). Learning functional maps between sensorimotor

  19. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    NASA Astrophysics Data System (ADS)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  20. Control of humanoid robot via motion-onset visual evoked potentials

    PubMed Central

    Li, Wei; Li, Mengfan; Zhao, Jing

    2015-01-01

    This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar across robot images that intuitively represent the robot behaviors to be controlled with the mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain high accuracy. The study results document an off-line average accuracy of 93% for hitting targets, averaged over five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918

  1. Drift-Free Humanoid State Estimation fusing Kinematic, Inertial and LIDAR Sensing

    DTIC Science & Technology

    2014-08-01

    registration to this map and other objects in the robot’s vicinity while also contributing to direct low-level control of a Boston Dynamics Atlas robot...requirements. I. INTRODUCTION Dynamic locomotion of legged robotic systems remains an open and challenging research problem whose solution will enable...humanoids to perform tasks and reach places inaccessible to wheeled or tracked robots. Several research institutions are developing walking and running

  2. Toward humanoid robots for operations in complex urban environments

    NASA Astrophysics Data System (ADS)

    Pratt, Jerry E.; Neuhaus, Peter; Johnson, Matthew; Carff, John; Krupp, Ben

    2010-04-01

    Many infantry operations in urban environments, such as building clearing, are extremely dangerous and difficult and often result in high casualty rates. Despite the fast pace of technological progress in many other areas, the tactics and technology deployed for many of these dangerous urban operations have not changed much in the last 50 years. While robots have been extremely useful for improvised explosive device (IED) detonation, under-vehicle inspection, surveillance, and cave exploration, there is still no fieldable robot that can operate effectively in cluttered streets and inside buildings. Developing a fieldable robot that can maneuver in complex urban environments is challenging due to narrow corridors, stairs, rubble, doors, cluttered doorways, and other obstacles. Typical wheeled and tracked robots have trouble getting through most of these obstacles. A bipedal humanoid is ideally shaped for many of these obstacles because its legs are long and skinny. It therefore has the potential to step over large barriers, gaps, rocks, and steps, yet squeeze through narrow passageways and doorways. By being able to walk with one foot directly in front of the other, humanoids also have the potential to walk over narrow "balance beam" style objects and to cross a narrow row of stepping stones. We describe some recent advances in humanoid robots, particularly recovery from disturbances such as pushes and walking over rough terrain. Our disturbance recovery algorithms are based on the concept of Capture Points. An N-Step Capture Point is a point on the ground to which a legged robot can step in order to stop in N steps. The N-Step Capture Region is the set of all N-Step Capture Points. In order to walk without falling, a legged robot must step somewhere in the intersection between an N-Step Capture Region and the available footholds on the ground. We present results of push recovery using Capture Points on our humanoid robot M2V2.
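
    The Capture Point concept above has a standard closed-form instantaneous (0-step) version under the linear inverted pendulum model; the sketch below uses that textbook formula, not necessarily the exact M2V2 implementation.

    ```python
    import math

    def capture_point(com_pos, com_vel, com_height, g=9.81):
        """Instantaneous (0-step) capture point under the linear inverted
        pendulum model: the ground point where the robot must place its foot
        to come to rest without taking further steps."""
        omega = math.sqrt(g / com_height)  # LIP natural frequency, 1/s
        return (com_pos[0] + com_vel[0] / omega,
                com_pos[1] + com_vel[1] / omega)

    # Example: CoM above the origin at 0.9 m, pushed forward at 0.5 m/s
    icp = capture_point((0.0, 0.0), (0.5, 0.0), 0.9)  # ~0.15 m ahead
    ```

    A stationary robot's capture point coincides with its center-of-mass projection; any push shifts it in the direction of the induced velocity, which is what makes it a useful stepping target for push recovery.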

  3. Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    PubMed Central

    Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio

    2015-01-01

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs for expressing innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and more intuitively perceivable by humans as not unlike themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants’ psychophysiological responses have been explored to investigate whether there could be differences induced by interpreting robot or human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects’ psychophysiological state have been found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199

  4. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with disjoint attention behavior, rated it as more human-like, and found it more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  5. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of a humanoid robotic platform, and specifically of the open-source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  6. The Affordance Template ROS Package for Robot Task Programming

    NASA Technical Reports Server (NTRS)

    Hart, Stephen; Dinh, Paul; Hambuchen, Kimberly

    2015-01-01

    This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic XML description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.
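
    As a rough illustration of the robot-agnostic, object-centric idea described above (the element and attribute names below are hypothetical, not the actual Affordance Template XML schema), a template's end-effector waypoints can be parsed and rescaled to the run-time object size:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical, simplified template: waypoints are expressed in the
    # object's coordinate frame, so the same file works on any robot.
    TEMPLATE = """
    <affordance_template name="valve_turn">
      <object name="valve" scale="1.0"/>
      <end_effector_waypoint x="0.10" y="0.00" z="0.05"/>
      <end_effector_waypoint x="0.00" y="0.10" z="0.05"/>
    </affordance_template>
    """

    def load_waypoints(xml_text, run_time_scale=1.0):
        """Parse object-centric waypoints and rescale them to the run-time
        object size, keeping the template itself size- and robot-agnostic."""
        root = ET.fromstring(xml_text)
        return [
            tuple(run_time_scale * float(wp.get(axis)) for axis in ("x", "y", "z"))
            for wp in root.iter("end_effector_waypoint")
        ]

    # Operator discovers the valve is twice the nominal size at run time
    waypoints = load_waypoints(TEMPLATE, run_time_scale=2.0)
    ```

    The operator-adjustable scale and object pose are exactly the run-time parameters the abstract says the RViz interface exposes; everything else stays in the template file.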

  7. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  8. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  9. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  10. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  11. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  12. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  13. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  14. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI,University of Minho; Lima, José

    There are several approaches to humanoid robot gait planning. The problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find a gait based on several criteria, such as energy minimization, acceleration, and step length, among others. Energy consumption can also be reduced with elastic elements coupled to each joint. This paper presents an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.
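
    The gait-optimization loop described above can be sketched with plain simulated annealing. The paper's Stretched Simulated Annealing variant and its simulator-evaluated cost are not reproduced here; the quadratic `toy` cost below is a stand-in for one simulated walking cycle's energy.

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.95,
                            iters=500, seed=0):
        """Generic simulated annealing over a gait-parameter vector.
        `cost` stands in for the simulator-evaluated energy of one cycle."""
        rng = random.Random(seed)
        x, fx = list(x0), cost(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(iters):
            cand = [xi + rng.gauss(0.0, step) for xi in x]  # perturb parameters
            fc = cost(cand)
            # Accept downhill moves always, uphill moves with Boltzmann probability
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling  # geometric cooling schedule
        return best, fbest

    # Toy stand-in cost with optimum at (0.3, 0.05), e.g. step length and foot lift
    toy = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.05) ** 2
    params, energy = simulated_annealing(toy, [0.0, 0.0])
    ```

    Early high-temperature acceptance of uphill moves lets the search escape poor local gaits before the cooling schedule makes it effectively greedy.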

  15. Social humanoid robot SARA: development of the wrist mechanism

    NASA Astrophysics Data System (ADS)

    Penčić, M.; Rackov, M.; Čavić, M.; Kiss, I.; Cioată, V. G.

    2018-01-01

    This paper presents the development of a wrist mechanism for humanoid robots. The research was conducted within a project developing the social humanoid robot Sara, a mobile anthropomorphic platform for researching the social behaviour of robots. There are two basic approaches to realizing a humanoid wrist. The first is based on biologically inspired structures that have variable stiffness, and the second on low-backlash mechanisms that have high stiffness. Our solution is a low-backlash differential mechanism that requires small actuators. Based on the kinematic-dynamic requirements, a dynamic model of the robot wrist is formed. A dynamic simulation for several hand positions was performed and the driving torques of the wrist mechanism were determined. The realized wrist has 2 DOFs and enables flexion/extension of 115°, ulnar/radial deviation of ±45°, and combinations of these two movements. It consists of a differential mechanism with three spur bevel gears, two of which are driving gears and identical, while the third is the driven gear to which the robot hand is attached. Power and motion transmission from the actuators to the input links of the differential mechanism is realized with two identical gear mechanisms placed in parallel. The wrist mechanism has high carrying capacity and reliability, high efficiency, a compact design, and low backlash that provides high positioning accuracy and repeatability of movements, which is essential for motion control.
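
    A two-input bevel-gear differential of this kind is commonly modeled with the sum/difference kinematics below; the mapping and the `ratio` parameter are illustrative assumptions for a generic differential wrist, not SARA's published gear data.

    ```python
    def wrist_angles(theta1, theta2, ratio=1.0):
        """Differential bevel-gear wrist: two identical driving gears turn the
        driven gear carrying the hand. Equal motor rotations pitch the hand
        (flexion/extension); opposite rotations roll it (ulnar/radial deviation).
        `ratio` is a hypothetical overall gear ratio."""
        flexion = ratio * (theta1 + theta2) / 2.0
        deviation = ratio * (theta1 - theta2) / 2.0
        return flexion, deviation

    def motor_angles(flexion, deviation, ratio=1.0):
        """Inverse mapping: motor commands for a desired wrist pose."""
        return (flexion + deviation) / ratio, (flexion - deviation) / ratio
    ```

    Because both motors contribute to both axes, each motor only needs to supply about half the torque of a serial two-axis wrist, which is why a differential layout gets by with small actuators.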

  16. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

    Programming of robots in today's world is often done in a procedural fashion, without incorporating object-oriented programming. However, a robust architecture that allows easy expansion of capabilities and a truly modular design requires object-oriented programming, whose concepts are not typically applied to a real-time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework that abstracts control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software that operates multiple independently developed control systems simultaneously and the safety measures that keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  17. Robotic system construction with mechatronic components inverted pendulum: humanoid robot

    NASA Astrophysics Data System (ADS)

    Sandru, Lucian Alexandru; Crainic, Marius Florin; Savu, Diana; Moldovan, Cristian; Dolga, Valer; Preitl, Stefan

    2017-03-01

    Mechatronics is a new methodology used to achieve an optimal design of an electromechanical product. This methodology is a collection of practices, procedures, and rules used by those who work in a particular branch of knowledge or discipline. Education in mechatronics at the Polytechnic University Timisoara is organized on three levels: bachelor, master, and PhD studies. These activities also cover the design of mechatronic systems. In this context, the design, implementation, and experimental study of a family of mechatronic demonstrators occupies an important place. In this paper, a variant of a mechatronic demonstrator based on the combination of electrical and mechanical components is proposed. The demonstrator, named the humanoid robot, is equivalent to an inverted pendulum. The analysis of the components for the associated functions of the humanoid robot is presented. This type of development of mechatronic systems through the combination of hardware and software offers the opportunity to build optimal solutions.

  18. Adaptive neural control for dual-arm coordination of humanoid robot with unknown nonlinearities in output mechanism.

    PubMed

    Liu, Zhi; Chen, Ci; Zhang, Yun; Chen, C L P

    2015-03-01

    To achieve excellent dual-arm coordination of a humanoid robot, it is essential to deal with the nonlinearities existing in the system dynamics. The literature on humanoid robot control so far commonly assumes that the problem of output hysteresis can be ignored. However, in practical applications output hysteresis is widespread, and its presence limits the motion/force performance of the robotic system. In this paper, an adaptive neural control scheme, which takes the unknown output hysteresis and computational efficiency into account, is presented and investigated. In the controller design, prior knowledge of the system dynamics is assumed to be unavailable. The motion error is guaranteed to converge to a small neighborhood of the origin by Lyapunov's stability theory. Simultaneously, the internal force is kept bounded and its error can be made arbitrarily small.

  19. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expressions of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased responses to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural responses to the perception of human actions. PMID:20657777

  20. Robonaut 2 and You: Specifying and Executing Complex Operations

    NASA Technical Reports Server (NTRS)

    Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia

    2017-01-01

    Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.

  1. RoboJockey: Designing an Entertainment Experience with Robots.

    PubMed

    Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo

    2016-01-01

    The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.

  2. Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context

    DTIC Science & Technology

    2016-12-01

    Case Study: Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it has first been deployed on a team of...features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to...on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment. I. INTRODUCTION

  3. Electroactive polymer and shape memory alloy actuators in biomimetics and humanoids

    NASA Astrophysics Data System (ADS)

    Tadesse, Yonas

    2013-04-01

    There is a strong need to replicate natural muscles with artificial materials, as the structure and function of natural muscle are optimal for articulation. In particular, the cylindrical shape of natural muscle fiber and its interconnected structure motivate critical investigation of artificial muscle geometry and its implementation in the design phase of certain platforms. Biomimetic robots and Humanoid Robot heads with Facial Expressions (HRwFE) are typical platforms that can be used to study the geometrical effects of artificial muscles. It has been shown that electroactive polymer and shape memory alloy artificial muscles and their composites are among the candidate materials that may replicate natural muscles, and they have shown great promise for biomimetics and humanoid robots. The application of these materials to these systems reveals the challenges and associated technologies that need to be developed in parallel. This paper will focus on computer-aided design (CAD) models of conductive polymer and shape memory alloys in various biomimetic systems and Humanoid Robots with Facial Expressions (HRwFE). The design of these systems will be presented in a comparative manner, focusing primarily on three critical parameters: the stress, the strain, and the geometry of the artificial muscle.

  4. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.

    PubMed

    Zhao, Jing; Li, Wei; Li, Mengfan

    2015-01-01

    In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operational safety, we set a classification accuracy above 90.0% as a mandatory requirement for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model achieved an average accuracy of about 90% with at most four stimulus targets, whereas the P300 model achieved accuracies over 90.0% with six or more stimulus targets at five repetitions per trial. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
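
    ITR figures of this kind are conventionally computed with the Wolpaw formula; a minimal sketch follows. Note that plugging in 4 classes, 90.3% accuracy, and 3.65 s per selection gives roughly 22.8 bits/min, close to but not identical to the reported 24.7 bits/min, since reported ITRs depend on how trial time is defined.

    ```python
    import math

    def wolpaw_itr(n_classes, accuracy, trial_seconds):
        """Wolpaw information transfer rate in bits/min for an n-class BCI:
        bits per selection scaled to a per-minute rate."""
        n, p = n_classes, accuracy
        bits = math.log2(n)
        if 0.0 < p < 1.0:  # entropy terms vanish at p = 1
            bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * 60.0 / trial_seconds

    # 4-class SSVEP at 90.3% accuracy, 3.65 s per selection -> ~22.8 bits/min
    itr_ssvep = wolpaw_itr(4, 0.903, 3.65)
    ```

    The formula shows the trade-off the comparison turns on: the P300 model supports more classes (higher bits per selection) but its longer trial time drags the per-minute rate below the SSVEP model's.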

  5. Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots

    PubMed Central

    Li, Mengfan

    2015-01-01

In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments on the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were presented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operational safety, we set a classification accuracy above 90.0% as a mandatory requirement for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets at five repetitions per trial achieved accuracies over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to make the robot walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model responded faster to the subject's mental activity and relied less on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper. PMID:26562524
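The reported information transfer rates can be sanity-checked with the standard Wolpaw ITR formula. This is a sketch under the assumption that the paper uses this common definition; the abstract does not state which ITR variant was applied, and differing timing conventions would explain the small gap to the reported numbers.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_time_s):
    """Wolpaw information transfer rate in bits/min."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_time_s

# Figures from the abstract: 4-class SSVEP (90.3%, 3.65 s), 6-class P300 (91.3%, 6.6 s)
print(round(wolpaw_itr(4, 0.903, 3.65), 1))  # → 22.8 (reported: 24.7 bits/min)
print(round(wolpaw_itr(6, 0.913, 6.6), 1))   # → 17.8 (reported: 18.8 bits/min)
```

Both computed values land close to but slightly below the reported ITRs, which is consistent with the authors counting trial time differently (e.g., excluding inter-trial gaps).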

  6. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related... universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  7. How Do Young Children Deal with Hybrids of Living and Non-Living Things: The Case of Humanoid Robots

    ERIC Educational Resources Information Center

    Saylor, Megan M.; Somanader, Mark; Levin, Daniel T.; Kawamura, Kazuhiko

    2010-01-01

    In this experiment, we tested children's intuitions about entities that bridge the contrast between living and non-living things. Three- and four-year-olds were asked to attribute a range of properties associated with living things and machines to novel category-defying complex artifacts (humanoid robots), a familiar living thing (a girl), and a…

  8. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed Central

    Zheng, Zhi; Swanson, Amy R.; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A.; Weitlauf, Amy F.; Sarkar, Nilanjan

    2013-01-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms. PMID:24014194

  9. Can Robotic Interaction Improve Joint Attention Skills?

    PubMed

    Warren, Zachary E; Zheng, Zhi; Swanson, Amy R; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A; Weitlauf, Amy F; Sarkar, Nilanjan

    2015-11-01

    Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms.

  10. Examples of design and achievement of vision systems for mobile robotics applications

    NASA Astrophysics Data System (ADS)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

Our goal is to design and achieve a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six-legged, four-legged such as SONY's AIBO, and humanoid), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.

  11. Building Robota, a mini-humanoid robot for the rehabilitation of children with autism.

    PubMed

    Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin

    2007-01-01

    The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot to assess children's imitation ability and to teach children simple coordinated behaviors. In this article, the authors review the recent technological developments that have made the Robota robots suitable for use with children with autism. They critically appraise the main outcomes of two sets of behavioral studies conducted with Robota and discuss how these results inform future development of the Robota robots and robots in general for the rehabilitation of children with complex developmental disabilities.

  12. Robot body self-modeling algorithm: a collision-free motion planning approach for humanoids.

    PubMed

    Leylavi Shoushtari, Ali

    2016-01-01

Motion planning for humanoid robots is a critical issue due to high redundancy and to theoretical and technical considerations, e.g., stability, motion feasibility, and collision avoidance. The strategies the central nervous system employs to plan, signal, and control human movements are a source of inspiration for dealing with these problems. Self-modeling is a concept inspired by body self-awareness in humans. In this research it is integrated into an optimal motion planning framework in order to detect and avoid collision of the manipulated object with the humanoid's body while performing a dynamic task. Twelve parametric functions are designed as self-models to determine the boundary of the humanoid's body. The boundaries mathematically defined by the self-models are then employed to calculate the safe region in which the box avoids collision with the robot. Four different objective functions are employed in motion simulation to validate the robustness of the algorithm under different dynamics. The results also confirm the collision avoidance, realism, and stability of the predicted motion.
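The role of such parametric boundary self-models can be illustrated with a much-simplified sketch: one body segment modeled as a capsule (a line segment with a radius), plus a test that keeps an object point in the safe region outside it. The geometry, radii, and margin below are illustrative assumptions, not the twelve functions used in the paper.

```python
import math

def point_segment_dist(p, a, b):
    """Shortest distance from point p to segment ab (3-D coordinates as tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = tuple(a[i] + t * ab[i] for i in range(3))
    return math.dist(p, closest)

def outside_capsule(p, a, b, radius, margin=0.02):
    """True if p keeps a safety margin away from a capsule-shaped body segment."""
    return point_segment_dist(p, a, b) > radius + margin

# Forearm modeled as a vertical capsule of radius 5 cm; box corner 20 cm to the side
print(outside_capsule((0.2, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.3), 0.05))  # → True
```

A full self-model would evaluate such boundary functions for every segment and feed the result into the motion-planning cost as a collision-avoidance constraint.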

  13. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin.

    PubMed

    Roncone, Alessandro; Hoffmann, Matej; Pattacini, Ugo; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.

  14. Walk-Startup of a Two-Legged Walking Mechanism

    NASA Astrophysics Data System (ADS)

    Babković, Kalman; Nagy, László; Krklješ, Damir; Borovac, Branislav

There is a growing interest in humanoid robots. One of their most important characteristics is two-legged motion: walking. Starting and stopping of humanoid robots introduce substantial delays. In this paper, the goal is to explore the possibility of using a short unbalanced state of the biped robot to quickly gain speed and achieve the steady-state velocity during a period shorter than half of the single-support phase. The proposed method is verified by simulation. Maintaining a steady-state, balanced gait is not considered in this paper.

  15. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.

    PubMed

    Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan

    2017-01-01

Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.

  16. Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot

    PubMed Central

    Alexandrov, Alexei V.; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A.; Hettich, Georg; Husek, Dusan

    2017-01-01

    Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows “real-world robustness” against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive (“voluntary”) movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices. PMID:28487646
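The core idea, rewriting coupled segment dynamics as independent scalar equations in eigenmovement coordinates, can be sketched on a toy symmetric 2-DoF system. The matrix below is an illustrative assumption, not the identified robot dynamics from the paper.

```python
import math

# Toy coupled linearized dynamics q'' = -A q, with off-diagonal (coupling) terms
A = [[2.0, 1.0],
     [1.0, 2.0]]

# Eigenvectors of this symmetric 2x2 matrix, hand-computed;
# the columns of V are the eigenmovement directions (kinematic synergies)
s = 1.0 / math.sqrt(2.0)
V = [[s, s],
     [s, -s]]

def in_modal_coords(V, A):
    """Compute V^T A V: the dynamics expressed in eigenmovement coordinates."""
    n = len(A)
    return [[sum(V[k][i] * A[k][l] * V[l][j] for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

D = in_modal_coords(V, A)
# Off-diagonal terms vanish: each eigenmovement obeys its own scalar equation
print(round(D[0][0], 6), round(D[1][1], 6))  # → 3.0 1.0
```

In this decoupled form, each coordinate can be controlled by an independent scalar feedback loop instead of a full state-feedback controller over all joints.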

  17. Humanoid Robot Control System Balance Dance Indonesia and Reader Filters Using Complementary Angle Values

    NASA Astrophysics Data System (ADS)

    Sholihin; Susanti, Eka

    2018-02-01

As technology advances, people are increasingly driven to follow and apply its developments. A robot is a tool that can assist people and offers several advantages. A humanoid robot is, essentially, a robot that resembles a human being in its overall drive structure. In building this humanoid robot, the researchers used the MPU6050 module, an important component of the robot because it provides angle readings with respect to the X and Y reference axes; the raw angle reading still contains noise if it is not filtered first. A complementary filter is used to reduce this noise, by tuning the filter coefficient and the sampling time, which affect the angle signal updates. The filtered angle value is fed to a PID controller, which generates output values that are integrated with the servo pulses. Testing showed that the most stable angle reading in this experiment was obtained with the filter coefficient a = 0.96 and the sampling time dt = 10 ms.

  18. Humanoid Robots: A New Kind of Tool

    DTIC Science & Technology

    2000-01-01

Breazeal (Ferrell), R. Irie, C. C. Kemp, M. J. Marjanovic, B. Scassellati, M. M. Williamson, Alternate Essences of Intelligence, AAAI 1998. 2 R. A. Brooks, C...Breazeal, M. J. Marjanovic, B. Scassellati, M. M. Williamson, The Cog Project: Building a Humanoid Robot, Computation for Metaphors, Analogy and...Functions, Vol. 608, 1990, New York Academy of Sciences, pp. 637-676. 7 M. J. Marjanovic, B. Scassellati, M. M. Williamson, Self-Taught Visually-Guided

  19. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed participants’ gaze to the one with a disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints for the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  20. Feasibility of using a humanoid robot to elicit communicational response in children with mild autism

    NASA Astrophysics Data System (ADS)

    Malik, Norjasween Abdul; Shamsuddin, Syamimi; Yussof, Hanafiah; Azfar Miskam, Mohd; Che Hamid, Aminullah

    2013-12-01

Research evidence is accumulating regarding the potential use of robots in the rehabilitation of children with autism. The purpose of this paper is to elaborate on the results concerning communicational response in two children with autism during interaction with the humanoid robot NAO. Both autistic subjects in this study had been diagnosed with mild autism. Following the outcome of our first pilot study, the aim of this experiment is to explore the application of the NAO robot to engage with a child and teach emotions through a game-centered and song-based approach. The experimental procedure involved interaction between the humanoid robot NAO and each child through a series of four different modules. The observation items are based on ten items selected with reference to GARS-2 (Gilliam Autism Rating Scale, second edition) and input from clinicians and therapists. The results clearly indicated that both children responded positively during the interaction. Negative responses, such as feeling scared or shying away from the robot, were not detected. Two-way communication between the child and the robot in real time had a significantly positive impact on the responses toward the robot. To conclude, it is feasible to include robot-based interaction, specifically to elicit communicational response, as part of the rehabilitation intervention for children with autism.

  1. Humanoids for Lunar and Planetary Surface Operations

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Myers, John; Newton, Jason; Csaszar, Ambrus; Gan, Quan; Hidalgo, Tim; Moore, Jeff; Sandoval, Steven; Xu, Jiajing; Schon, Aaron; hide

    2006-01-01

    Human-like shape makes humanoids well suited for being fostered/taught by humans, and for learning from humans, which we consider the best means to develop cognitive and perceptual/motor skills for truly intelligent, cognitive robots.

  2. Honda humanoid robots development.

    PubMed

    Hirose, Masato; Ogawa, Kenichi

    2007-01-15

Honda has been doing research on robotics since 1986, with a focus on bipedal walking technology. The research started with straight, static walking of the first prototype two-legged robot. Now, the continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot, ASIMO. ASIMO is Honda's most advanced robot so far in both mechanism and control system. ASIMO's configuration allows it to operate freely in the human living space. It could be of practical help to humans with its five-fingered arms as well as its walking function. The target of further development of ASIMO is a robot that improves life in human society. Much development work will continue, both mechanically and electronically, staying true to Honda's 'challenging spirit'.

  3. Pilot clinical application of an adaptive robotic system for young children with autism

    PubMed Central

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2013-01-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. PMID:24104517

  4. Sociable Machines: Expressive Social Exchange between Humans and Robots

    DTIC Science & Technology

    2000-05-01

many occasions about theories on emotion. I've cornered Robert Irie again and again about auditory processing. I've bugged Matto Marjanovic throughout...development at the MIT Artificial Intelligence Lab (Brooks, Breazeal, Marjanovic, Scassellati & Williamson 1999). Cog is a general purpose humanoid...RA-2, 253-262. Brooks, R. A., Breazeal, C., Marjanovic, M., Scassellati, B. & Williamson, M. M. (1999), The Cog Project: Building a Humanoid Robot, in

  5. Audio-Visual Perception System for a Humanoid Robotic Head

    PubMed Central

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there has been little evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared with that of unimodal systems, considering their technical limitations. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
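The flavor of Bayes-based audio-visual fusion can be conveyed with a minimal sketch: per-direction likelihoods from the audio and visual modalities, assumed conditionally independent, are combined into a posterior over candidate speaker directions. The direction grid and likelihood values below are invented for illustration and are not the paper's actual model.

```python
def fuse_bayes(prior, audio_lik, visual_lik):
    """Posterior over candidate directions, assuming conditionally independent cues."""
    unnorm = [p * a * v for p, a, v in zip(prior, audio_lik, visual_lik)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three candidate directions: left, center, right (uniform prior)
prior = [1 / 3, 1 / 3, 1 / 3]
audio = [0.2, 0.7, 0.1]   # audio localization favors center
visual = [0.1, 0.8, 0.1]  # vision agrees
posterior = fuse_bayes(prior, audio, visual)
print(max(range(3), key=lambda i: posterior[i]))  # → 1 (center)
```

When the modalities agree, the posterior sharpens beyond either cue alone; when one modality is unreliable (e.g., the speaker is outside the camera's field of view), a flat likelihood from that modality lets the other dominate.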

  6. Event-driven visual attention for the humanoid robot iCub

    PubMed Central

    Rea, Francesco; Metta, Giorgio; Bartolozzi, Chiara

    2013-01-01

Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend to. PMID:24379753

  7. Adaptive Language Games with Robots

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    2010-11-01

This paper surveys recent research into language evolution using computer simulations and robotic experiments. This field has made tremendous progress in the past decade, going from simple simulations of lexicon formation with animal-like cybernetic robots to sophisticated grammatical experiments with humanoid robots.

  8. Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    PubMed Central

    Roncone, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2016-01-01

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanisms for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement. PMID:27711136

  9. Pilot clinical application of an adaptive robotic system for young children with autism.

    PubMed

    Bekele, Esubalew; Crittendon, Julie A; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E

    2014-07-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small group of preschool children with autism spectrum disorders (n = 6) and a control group (n = 6). Children in both groups spent more time looking at the humanoid robot and were able to achieve a high level of accuracy across trials. However, across groups, children required higher levels of prompting to successfully orient within robot-administered trials. The results highlight both the potential benefits of closed-loop adaptive robotic systems as well as current limitations of existing humanoid-robotic platforms. © The Author(s) 2013.

  10. Using a cognitive architecture for general purpose service robot control

    NASA Astrophysics Data System (ADS)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

This paper considers a humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others. Using those skills, the robot should complete a voice command expressed in natural language that encodes a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner is used to decide which skills to activate or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should perform next to move toward the goal. Our proposal allows new goals to be included simply by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general-purpose service robot.

  11. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. 
This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
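
    Although only descriptive in the frequency domain, the delayed-feedback structure described above can be exercised directly in the time domain. The sketch below is a minimal single-channel stand-in: an inverted-pendulum body under delayed PD control with torque noise. The mass, gains, and delay are invented for illustration and are not the study's estimated parameters.

```python
import numpy as np

# Minimal single-channel sketch of the control law at the core of the IC
# model: an inverted-pendulum body stabilized by delayed PD ("neural")
# feedback, simulated in the time domain with added torque noise.
# All parameter values are illustrative, not the study's estimates.
m, h, g = 70.0, 1.0, 9.81           # body mass [kg], CoM height [m]
J = m * h ** 2                      # point-mass inertia about the ankle
Kp, Kd = 1200.0, 300.0              # feedback gains [Nm/rad], [Nms/rad]
tau = 0.1                           # lumped neural time delay [s]
dt, T = 0.001, 10.0
n, delay = int(T / dt), int(tau / dt)

theta = np.zeros(n)                 # sway angle [rad]
omega = np.zeros(n)                 # sway velocity [rad/s]
theta[0] = 0.02                     # small initial lean
rng = np.random.default_rng(0)
for k in range(n - 1):
    kd = max(k - delay, 0)          # the controller sees delayed signals
    torque = -Kp * theta[kd] - Kd * omega[kd] + rng.normal(0.0, 0.5)
    alpha = (m * g * h * np.sin(theta[k]) + torque) / J
    omega[k + 1] = omega[k] + alpha * dt
    theta[k + 1] = theta[k] + omega[k + 1] * dt

print(f"peak sway {np.abs(theta).max():.4f} rad, final {theta[-1]:.5f} rad")
```

    With these gains the loop stays stable despite the 100 ms delay; raising the delay or gains reproduces the kind of instability that a noise-free frequency-domain fit alone cannot reveal.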

  13. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
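
    The exposure-adaptation idea can be illustrated with a simple feedback rule. The target band, step factor, and simulated scene below are invented for illustration and do not reflect the patented algorithm.

```python
import numpy as np

# Illustrative sketch (not the patented method): adapt camera exposure time
# so the image mean stays in a target band, avoiding the saturation or
# underexposure that would destroy feature data under threshold lighting.
def adapt_exposure(exposure_ms, frame, target=(90, 160), step=1.25,
                   lo_ms=0.1, hi_ms=50.0):
    """Return an updated exposure time given an 8-bit grayscale frame."""
    mean = frame.mean()
    if mean < target[0]:                 # too dark: lengthen exposure
        exposure_ms *= step
    elif mean > target[1]:               # too bright: shorten exposure
        exposure_ms /= step
    return float(np.clip(exposure_ms, lo_ms, hi_ms))

# Closed-loop simulation: constant scene radiance, brightness ~ exposure.
radiance = 12.0                          # hypothetical counts per ms
exposure = 1.0
for _ in range(30):
    frame = np.clip(np.full((48, 64), radiance * exposure), 0, 255)
    exposure = adapt_exposure(exposure, frame)
print(f"converged exposure: {exposure:.2f} ms")
```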

  14. A cortically-inspired model for inverse kinematics computation of a humanoid finger with mechanically coupled joints.

    PubMed

    Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A

    2016-05-19

    The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features, such as fingers whose last two joints are mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, with its last two joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as the sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution, or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper extremities, and has the potential to promote human-robot interactions.
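
    The coupling constraint is easy to make concrete. The toy planar finger below fixes DIP = 2/3 · PIP (a common anthropomorphic ratio) and solves the inverse kinematics numerically over the two free joints; the link lengths, the ratio, and the damped least-squares solver are illustrative choices, not the paper's cortical model.

```python
import numpy as np

# Planar three-link finger with the distal (DIP) joint mechanically
# coupled to the middle (PIP) joint. Only (MCP, PIP) are free variables.
L1, L2, L3, C = 0.045, 0.025, 0.018, 2.0 / 3.0   # lengths [m], coupling ratio

def fingertip(q):
    """Fingertip position for free joints q = (MCP, PIP); DIP = C * PIP."""
    mcp, pip = q
    a1, a2, a3 = mcp, mcp + pip, mcp + pip + C * pip
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3),
                     L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)])

def inverse_kinematics(target, q0=(0.3, 0.3), iters=100, eps=1e-6):
    q = np.array(q0, float)
    for _ in range(iters):
        err = target - fingertip(q)
        J = np.zeros((2, 2))                  # numerical Jacobian (central diff)
        for j in range(2):
            dq = np.zeros(2)
            dq[j] = eps
            J[:, j] = (fingertip(q + dq) - fingertip(q - dq)) / (2 * eps)
        # damped least-squares update
        q += np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ err)
    return q

goal = fingertip(np.array([0.5, 0.8]))        # a known reachable pose
q = inverse_kinematics(goal)
print(np.linalg.norm(goal - fingertip(q)))    # residual tip error [m]
```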

  15. Robust sensorimotor representation to physical interaction changes in humanoid motion learning.

    PubMed

    Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio

    2015-05-01

    This paper proposes a learning-from-demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions, and to apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes in the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.
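
    The core idea of the feature can be sketched in a few lines: keep only the order in which channels change direction, so monotone timing warps and amplitude scalings leave the representation unchanged. The change detector below (derivative sign flips) is a simplification chosen for illustration, not the paper's exact definition.

```python
import numpy as np

# Toy version of a phase-transfer-sequence feature: record the order in
# which motion channels change direction, discarding absolute timing and
# amplitude.
def phase_transfer_sequence(X):
    """X: (T, n_channels) -> channel indices in order of direction changes."""
    events = []
    dX = np.diff(X, axis=0)
    for ch in range(X.shape[1]):
        flips = np.where(np.diff(np.sign(dX[:, ch])) != 0)[0]
        events.extend((t, ch) for t in flips)
    return [ch for _, ch in sorted(events)]

t = np.linspace(0, 2 * np.pi, 200)
warp = 2 * np.pi * (t / (2 * np.pi)) ** 1.5          # monotone time warp
base = np.column_stack([np.sin(t), np.sin(t - 0.8)])
changed = np.column_stack([2 * np.sin(warp), 2 * np.sin(warp - 0.8)])

# Different timing and amplitude, same phase transfer sequence:
print(phase_transfer_sequence(base), phase_transfer_sequence(changed))
```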

  16. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, following a neural-dynamic design method, a cyclic-motion performance index is first exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. It can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position, while accounting for physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver.
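
    A recurrent network commonly used for such constrained QP problems is a projection network: discretized gradient dynamics projected onto the constraint set at every step. The toy bound-constrained QP below is a stand-in for the paper's kinematic formulation; the problem data are invented.

```python
import numpy as np

# Projection-type recurrent network solving a bound-constrained QP:
#   minimize 1/2 x^T Q x + c^T x   subject to   lo <= x <= hi
Q = np.array([[4.0, 1.0], [1.0, 3.0]])      # positive definite
c = np.array([-8.0, -6.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.5, 1.5])

x = np.zeros(2)
alpha = 0.1                                  # step size / time constant
for _ in range(500):                         # discretized network dynamics
    x = np.clip(x - alpha * (Q @ x + c), lo, hi)   # project onto the box

print(x)
```

    The dynamics settle on the constrained optimum (1.5, 1.5) at the box boundary rather than the unconstrained minimizer, which lies outside the feasible set.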

  17. Recent trends in humanoid robotics research: scientific background, applications, and implications.

    PubMed

    Solis, Jorge; Takanishi, Atsuo

    2010-11-01

    Although the market is still small at this moment, the application fields of robots are gradually spreading from the manufacturing industry to others, with robots seen as an important component in supporting an aging society. For this purpose, research on human-robot interaction (HRI) has become an emerging topic of interest for both basic research and customer applications. These studies focus especially on the behavioral and cognitive aspects of the interaction and the social contexts surrounding it. As part of these studies, the term 'roboethics' has been introduced as an approach to discuss the potentialities and the limits of robots in relation to human beings. In this article, we describe recent research trends in the field of humanoid robotics. Their principal applications and possible impact are discussed.

  18. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  19. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, simply planning and following a path may not guarantee competent dynamic obstacle avoidance, because the robot acquires only limited information from the environment through a local vision sensor. It is therefore essential to update its local map as frequently as possible by controlling its gaze while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria, based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, a fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, the partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
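
    A standard way to realize "fuzzy integral over partial evaluations with degrees of consideration" is a Choquet integral with respect to a Sugeno lambda-measure. The densities and candidate scores below are invented; only the aggregation machinery is meant as illustration.

```python
import numpy as np

def sugeno_lambda(densities):
    """Nonzero root of prod(1 + lam*g_i) = 1 + lam, found by bisection."""
    d = np.asarray(densities, float)
    f = lambda lam: np.prod(1.0 + lam * d) - (1.0 + lam)
    lo, hi = (-1.0 + 1e-9, -1e-9) if d.sum() > 1 else (1e-9, 1e6)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet(scores, densities):
    """Choquet integral of the scores w.r.t. the Sugeno lambda-measure."""
    lam = sugeno_lambda(densities)
    order = np.argsort(scores)[::-1]              # criteria by descending score
    h = np.append(np.asarray(scores, float)[order], 0.0)
    g, total = 0.0, 0.0
    for k, i in enumerate(order):
        g = g + densities[i] + lam * g * densities[i]   # measure of top-k set
        total += (h[k] - h[k + 1]) * g
    return total

# degrees of consideration: map confidence, waypoint, localization, obstacles
densities = np.array([0.3, 0.25, 0.2, 0.35])
candidates = {"left":  [0.8, 0.4, 0.6, 0.2],
              "ahead": [0.6, 0.7, 0.7, 0.6],
              "right": [0.3, 0.9, 0.2, 0.5]}
best = max(candidates, key=lambda k: choquet(candidates[k], densities))
print(best)
```

    Note how the balanced candidate wins over candidates with a single high partial score, which is the usual argument for fuzzy-integral fusion over a plain weighted sum.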

  20. Reaching and Grasping a Glass of Water by Locked-In ALS Patients through a BCI-Controlled Humanoid Robot

    PubMed Central

    Spataro, Rossella; Chella, Antonio; Allison, Brendan; Giardina, Marcello; Sorbello, Rosario; Tramonte, Salvatore; Guger, Christoph; La Bella, Vincenzo

    2017-01-01

    Locked-in Amyotrophic Lateral Sclerosis (ALS) patients are fully dependent on caregivers for any daily need. At this stage, basic communication and environmental control may not be possible even with commonly used augmentative and alternative communication devices. Brain-Computer Interface (BCI) technology allows users to modulate brain activity for communication and control of machines and devices without requiring motor control. In the last several years, numerous articles have described how persons with ALS could effectively use BCIs for different goals, usually spelling. In the present study, locked-in ALS patients used a BCI system to directly control the humanoid robot NAO (Aldebaran Robotics, France) with the aim of reaching and grasping a glass of water. Four ALS patients and four healthy controls were recruited and trained to operate this humanoid robot through a P300-based BCI. A few minutes' training was sufficient to operate the system efficiently in different environments. Three of the four ALS patients and all controls successfully performed the task with a high level of accuracy. These results suggest that BCI-operated robots can be used by locked-in ALS patients as an artificial alter ego, the machine being able to move, speak and act in their place. PMID:28298888
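
    The decision step of a P300-based BCI reduces to epoch averaging followed by picking the stimulus with the strongest response in the P300 window. The synthetic signal model below (amplitudes, noise level, window) is illustrative, not the study's pipeline.

```python
import numpy as np

# Toy P300 decoder: average the EEG epochs recorded after each stimulus,
# then select the stimulus whose average shows the largest positive
# deflection in the P300 window (~250-450 ms after stimulus onset).
rng = np.random.default_rng(1)
fs, n_rep, n_stim = 250, 20, 4                        # Hz, repetitions, stimuli
t = np.arange(int(0.8 * fs)) / fs                     # 0-800 ms epochs
p300 = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # ~4 uV bump at 300 ms

target = 2                                            # attended stimulus
epochs = rng.normal(0.0, 8.0, (n_stim, n_rep, t.size))     # background EEG
epochs[target] += p300                                # only the target evokes a P300

avg = epochs.mean(axis=1)                             # averaging suppresses noise
win = (t >= 0.25) & (t <= 0.45)                       # P300 scoring window
decoded = int(np.argmax(avg[:, win].mean(axis=1)))
print(decoded)
```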

  1. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033
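
    The time-shift idea itself is compact: correlate each epoch against a template at several candidate latencies, so peak-time jitter becomes part of the feature vector rather than an error source. The template, shift grid, and sampling rate below are illustrative stand-ins.

```python
import numpy as np

# Sketch of a time-shift correlation feature: correlation of an epoch with
# a P300 template at several latencies. The resulting series would feed a
# classifier (an ANN in the paper); here we just locate the best shift.
def time_shift_correlations(epoch, template, shifts):
    feats = []
    for s in shifts:                           # shift the template by s samples
        shifted = np.roll(template, s)
        feats.append(np.corrcoef(epoch, shifted)[0, 1])
    return np.array(feats)

fs = 250
t = np.arange(int(0.6 * fs)) / fs
template = np.exp(-((t - 0.30) ** 2) / (2 * 0.04 ** 2))   # canonical P300 shape
epoch = np.exp(-((t - 0.34) ** 2) / (2 * 0.04 ** 2))      # P300 arriving 40 ms late

shifts = np.arange(-5, 26, 5)                  # -20 ms .. +100 ms in 20 ms steps
feats = time_shift_correlations(epoch, template, shifts)
best = shifts[np.argmax(feats)]
print(best / fs)                               # recovered latency offset [s]
```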

  3. Approach-Phase Precision Landing with Hazard Relative Navigation: Terrestrial Test Campaign Results of the Morpheus/ALHAT Project

    NASA Technical Reports Server (NTRS)

    Crain, Timothy P.; Bishop, Robert H.; Carson, John M., III; Trawny, Nikolas; Hanak, Chad; Sullivan, Jacob; Christian, John; DeMars, Kyle; Campbell, Tom; Getchius, Joel

    2016-01-01

    The Morpheus Project began in late 2009 as an ambitious effort code-named Project M to integrate three ongoing multi-center NASA technology developments: humanoid robotics, liquid oxygen/liquid methane (LOX/LCH4) propulsion, and Autonomous Precision Landing and Hazard Avoidance Technology (ALHAT) into a single engineering demonstration mission to be flown to the Moon by 2013. The humanoid robot effort was redirected to a deployment of Robonaut 2 on the International Space Station in February of 2011, while Morpheus continued as a terrestrial field test project integrating the existing ALHAT Project's technologies into a sub-orbital flight system using the world's first LOX/LCH4 main propulsion and reaction control system fed from the same blowdown tanks. A series of 33 tethered tests with the Morpheus 1.0 and Morpheus 1.5 vehicles was conducted from April 2011 to December 2013 before successful, sustained free flights with the primary Vertical Testbed (VTB) navigation configuration began with Free Flight 3 on December 10, 2013. Over the course of the following 12 free flights and 3 tethered flights, components of the ALHAT navigation system were integrated into the Morpheus vehicle, operations, and flight control loop. The ALHAT navigation system was integrated and run concurrently with the VTB navigation system as a reference and fail-safe option in flight (see touchdown position estimate comparisons in Fig. 1). Flight testing completed with Free Flight 15 on December 15, 2014 with completely autonomous Hazard Detection and Avoidance (HDA), integration of surface-relative and Hazard Relative Navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the VTB GPS-based navigation solution at the safe landing site target.
    This paper describes the Morpheus joint VTB/ALHAT navigation architecture, the sensors utilized during the terrestrial flight campaign, issues resolved during testing, and the navigation results from the flight tests.
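
    As a generic illustration of the measurement-fusion step (the flight filter is a dual-state inertial estimator well beyond this sketch), here is a 1-D constant-velocity Kalman filter fusing noisy position fixes; all parameters are invented.

```python
import numpy as np

# 1-D constant-velocity Kalman filter: predict with a linear motion model,
# then update with noisy position measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for (pos, vel)
H = np.array([[1.0, 0.0]])                 # we measure position only
Qn = np.diag([1e-4, 1e-3])                 # process noise covariance
R = np.array([[0.25]])                     # measurement noise (0.5 m std)

rng = np.random.default_rng(7)
x_true = np.array([0.0, 1.0])              # starts at 0 m, moving at 1 m/s
x, P = np.zeros(2), np.eye(2) * 10.0       # initial estimate and covariance
for _ in range(100):
    x_true = F @ x_true                    # true motion
    z = H @ x_true + rng.normal(0.0, 0.5, 1)
    # predict
    x, P = F @ x, F @ P @ F.T + Qn
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(abs(x[0] - x_true[0]), abs(x[1] - x_true[1]))
```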

  4. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    PubMed

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics'--show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  5. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.
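
    The impedance idea reduces to a spring-damper law about a commanded pose, mapped to actuator effort. The 1-DoF sketch below uses invented gains and a unit mass, not the patent's multi-joint framework.

```python
import numpy as np

# Minimal impedance control sketch: the commanded force follows a
# spring-damper law about a desired pose, so the same structure serves
# object-, end-effector-, or joint-level control depending on where the
# impedance is attached. Gains and the 1-DoF "object" are illustrative.
K, D = 50.0, 14.0                  # stiffness [N/m] and damping [Ns/m]
m = 1.0                            # effective mass being driven [kg]
x_d = 0.3                          # desired position (e.g., from the GUI) [m]

x, v, dt = 0.0, 0.0, 0.001
for _ in range(5000):              # 5 s of simulation
    f = K * (x_d - x) - D * v      # impedance control law
    a = f / m
    v += a * dt                    # semi-implicit Euler integration
    x += v * dt

print(x)                           # settles at the commanded pose
```

    The gains above give a nearly critically damped response, so the object approaches the target without overshoot; softer contact behavior is obtained simply by lowering K.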

  6. Human brain spots emotion in non humanoid robots

    PubMed Central

    Foucher, Aurélie; Jouvent, Roland; Nadel, Jacqueline

    2011-01-01

    The computation by which our brain elaborates fast responses to emotional expressions is currently an active field of brain studies. Previous studies have focused on stimuli taken from everyday life. Here, we investigated event-related potentials in response to happy vs neutral stimuli of human and non-humanoid robots. At the behavioural level, emotion shortened reaction times similarly for robotic and human stimuli. Early P1 wave was enhanced in response to happy compared to neutral expressions for robotic as well as for human stimuli, suggesting that emotion from robots is encoded as early as human emotion expression. Congruent with their lower faceness properties compared to human stimuli, robots elicited a later and lower N170 component than human stimuli. These findings challenge the claim that robots need to present an anthropomorphic aspect to interact with humans. Taken together, such results suggest that the early brain processing of emotional expressions is not bounded to human-like arrangements embodying emotion. PMID:20194513

  7. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

    We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram (EMG), which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.

  8. Towards Machine Learning of Motor Skills

    NASA Astrophysics Data System (ADS)

    Peters, Jan; Schaal, Stefan; Schölkopf, Bernhard

    Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. Early approaches to this goal during the heyday of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was placed in the growing wake of machine learning, which promised fully adaptive control algorithms that learn both by observation and by trial and error. To date, however, learning techniques have yet to fulfill this promise, as only a few methods manage to scale to the high-dimensional domains of manipulator robotics, or even the upcoming trend of humanoid robotics, and scaling has usually been achieved only in precisely pre-structured domains. In this paper, we investigate the ingredients of a general approach to motor skill learning in order to get one step closer to human-like performance. To do so, we study two major components of such an approach: first, a theoretically well-founded general approach to representing the required control structures for task representation and execution; and second, appropriate learning algorithms that can be applied in this setting.

  9. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly to handle voice and images. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on the Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
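
    FastICA itself fits in a few dozen lines: whiten the mixtures, then run the fixed-point iteration with symmetric decorrelation. The two synthetic sources and the mixing matrix below stand in for real microphone signals; this is a generic FastICA sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
s1 = np.sign(np.sin(np.linspace(0, 10 * np.pi, n)))   # square-wave "speech"
s2 = rng.uniform(-1.0, 1.0, n)                        # broadband noise source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])                # unknown mixing matrix
X = A @ S                                             # observed mixtures

# Whitening: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA fixed-point iteration with the tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / n - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                                        # symmetric decorrelation

Y = W @ Z                                             # recovered components
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))   # each component matches one source up to sign/scale
```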

  10. Embedded diagnostic, prognostic, and health management system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more of the actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.

  11. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    PubMed Central

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons to encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by camera) in order to learn the associative relationship between its own motor-generated actions and its own visual body image. In the learning process, the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot. PMID:26998923
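
    The associative-learning core can be sketched as a linear Hebbian associator that links each motor code to the visual patterns (own view and a second perspective) it produces, so a seen action later recalls its motor representation. The binary codes and dimensions below are invented, and this is far simpler than the paper's architecture.

```python
import numpy as np

# Toy Hebbian (outer-product) associator between motor codes and the
# visual patterns each action produces from two perspectives.
rng = np.random.default_rng(5)
n_act, d_motor, d_vis = 4, 16, 64
motor = rng.choice([-1.0, 1.0], (n_act, d_motor))        # motor codes
views = rng.choice([-1.0, 1.0], (n_act, 2, d_vis))       # 2 perspectives each

W = np.zeros((d_motor, d_vis))
for a in range(n_act):
    for v in range(2):                                   # self-view + mirror view
        W += np.outer(motor[a], views[a, v])             # Hebbian update

# Recognition: a seen action (here action 2 from the mirror perspective)
# recalls its motor code via the sign of the projection.
recalled = np.sign(W @ views[2, 1])
print((recalled == motor[2]).mean())                     # fraction of bits correct
```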

  13. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    DTIC Science & Technology

    2009-03-01

    distributed robots. Proceedings of the Computer Supported Cooperative Work Conference’02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and...humanoid robots. Proceedings of HRI’06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita

  14. A direct methanol fuel cell system to power a humanoid robot

    NASA Astrophysics Data System (ADS)

    Joh, Han-Ik; Ha, Tae Jung; Hwang, Sang Youp; Kim, Jong-Ho; Chae, Seung-Hoon; Cho, Jae Hyung; Prabhuram, Joghee; Kim, Soo-Kil; Lim, Tae-Hoon; Cho, Baek-Kyu; Oh, Jun-Ho; Moon, Sang Heup; Ha, Heung Yong

    In this study, a direct methanol fuel cell (DMFC) system, which is the first of its kind, has been developed to power a humanoid robot. The DMFC system consists of a stack, a balance of plant (BOP), a power management unit (PMU), and a back-up battery. The stack has 42 unit cells and is able to produce about 400 W at 19.3 V. The robot is 125 cm tall, weighs 56 kg, and consumes 210 W during normal operation. The DMFC system is integrated into the robot and powers it stably for more than 2 h. The robot's power consumption during various motions is studied, and load sharing between the fuel cell and the back-up battery is also observed. The loss of methanol feed due to crossover and evaporation amounts to 32.0%, and the efficiency of the DMFC system in terms of net electric power is 22.0%.
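    The headline stack figures are mutually consistent, which a few lines of arithmetic make visible (per-cell voltage, per-cell power, and stack current are derived here for illustration; only the totals come from the abstract):

```python
# Cross-check of the reported DMFC stack figures. Derived values are
# illustrative; only the totals are stated in the abstract.
N_CELLS = 42
STACK_POWER_W = 400.0
STACK_VOLTAGE_V = 19.3
ROBOT_POWER_W = 210.0
RUNTIME_H = 2.0

cell_voltage = STACK_VOLTAGE_V / N_CELLS         # ~0.46 V per cell
cell_power = STACK_POWER_W / N_CELLS             # ~9.5 W per cell
stack_current = STACK_POWER_W / STACK_VOLTAGE_V  # ~20.7 A at full power
energy_delivered_wh = ROBOT_POWER_W * RUNTIME_H  # 420 Wh over one 2-h run

print(f"{cell_voltage:.2f} V/cell, {cell_power:.1f} W/cell, "
      f"{stack_current:.1f} A, {energy_delivered_wh:.0f} Wh")
```

    The ~0.46 V per cell is a plausible operating point for a DMFC under load, well below the stack's open-circuit voltage.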

  15. Grounding Action Words in the Sensorimotor Interaction with the World: Experiments with a Simulated iCub Humanoid Robot

    PubMed Central

    Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony

    2010-01-01

    This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained with a Back-Propagation-Through-Time algorithm to manipulate an object. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503
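    Back-Propagation-Through-Time unrolls the recurrent controller over the sequence and propagates the error gradient backward through every time step. A minimal single-unit sketch of the algorithm (not the iCub controller itself, whose architecture and dimensions are not given in the abstract):

```python
import numpy as np

# Minimal Back-Propagation-Through-Time for a one-unit recurrent network:
# h_{t+1} = tanh(w*h_t + u*x_t), loss on the final state. Illustrative
# sketch only; not the paper's controller.
rng = np.random.default_rng(0)
T = 5
xs = rng.normal(size=T)   # input sequence
target = 0.5              # desired final output

def forward(w, u):
    hs = [0.0]
    for t in range(T):
        hs.append(np.tanh(w * hs[-1] + u * xs[t]))
    loss = 0.5 * (hs[-1] - target) ** 2
    return hs, loss

def bptt(w, u):
    """Unroll, then walk the gradient back through every time step."""
    hs, _ = forward(w, u)
    dw = du = 0.0
    dh = hs[-1] - target                 # dL/dh_T
    for t in reversed(range(T)):
        da = dh * (1.0 - hs[t + 1] ** 2) # back through tanh
        dw += da * hs[t]                 # accumulate weight gradients
        du += da * xs[t]
        dh = da * w                      # carry error back one time step
    return dw, du
```

    The gradients accumulated this way match a finite-difference check on the loss, which is the usual sanity test for a hand-rolled BPTT implementation.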

  16. A Control Framework for Anthropomorphic Biped Walking Based on Stabilizing Feedforward Trajectories.

    PubMed

    Rezazadeh, Siavash; Gregg, Robert D

    2016-10-01

    Although dynamic walking methods have had notable successes in the control of bipedal robots in recent years, most humanoid robots still rely on quasi-static Zero Moment Point controllers. This work is an attempt to design a highly stable controller for dynamic walking of a human-like model that can be used both for control of humanoid robots and for prosthetic legs. The method is based on time-based trajectories that induce a highly stable limit cycle in the bipedal robot. The time-based nature of the controller motivates its use to entrain a model of an amputee walking, which can potentially lead to better coordination of the interaction between the prosthesis and the human. Simulations demonstrate the stability of the controller and its robustness against external perturbations.
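    A standard way to quantify the stability of such a walking limit cycle (a generic technique, not the paper's specific analysis) is to linearize the Poincaré return map about the periodic gait and check that all eigenvalues lie strictly inside the unit circle:

```python
import numpy as np

def is_orbitally_stable(return_map_jacobian):
    """Local orbital stability of a limit cycle: the linearized Poincare
    return map must contract, i.e. every eigenvalue magnitude < 1.
    Generic check; the Jacobian here would come from simulating one
    stride of the biped model and numerically differentiating."""
    eigvals = np.linalg.eigvals(return_map_jacobian)
    return bool(np.all(np.abs(eigvals) < 1.0))
```

    For example, a return map whose linearization is `diag(0.5, -0.3)` describes a stable gait (perturbations shrink each stride), while `diag(1.1, 0.2)` describes an unstable one.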

  17. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  18. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – Students gather to watch as a DARwIn-OP miniature humanoid robot from Virginia Tech Robotics demonstrates its soccer abilities at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  19. Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Lee, Jih-Hsien; Chao, Po-Yao; Wang, Chin-Yeh; Chen, Gwo-Dong

    2010-01-01

    As robot technologies develop, many researchers have tried to use robots to support education. Studies have shown that robots can help students develop problem-solving abilities and learn computer programming, mathematics, and science. However, few studies discuss the use of robots to facilitate the teaching of second languages. We discuss whether…

  20. Human-Robot Interaction: Status and Challenges.

    PubMed

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  1. Artificial humanoid for the elderly people.

    PubMed

    Simou, Panagiota; Alexiou, Athanasios; Tiligadis, Konstantinos

    2015-01-01

    While frailty and other multi-scale factors have to be correlated during a geriatric assessment, a few prototype robots have already been developed to measure and provide real-time information concerning elderly daily activities. Cognitive impairment and alterations in daily functions should be immediately recognized by caregivers, so that they can be prevented and possibly treated. In this chapter we recognize the necessity of artificial robots in personal care for the elderly population, not only as a mobile laboratory-geriatrician, but mainly as a socialized digital humanoid able to develop social behavior and activate memories and emotions.

  2. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    PubMed Central

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  3. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement.

    PubMed

    Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to elicit engagement and make the robot's behavior readable.

  4. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement

    PubMed Central

    Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed

    2014-01-01

    We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to elicit engagement and make the robot's behavior readable. PMID:24596554

  5. Robot Rocket Rally

    NASA Image and Video Library

    2014-03-14

    CAPE CANAVERAL, Fla. – A miniature humanoid robot known as DARwIn-OP, from Virginia Tech Robotics, plays soccer with a red tennis ball for a crowd of students at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett

  6. Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison

    2013-01-01

    Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements: first, that the robot can allow humans to enter its workspace during operation, and second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers, called Multi-Loop control, which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.
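    The abstract does not spell out the embedded impedance law; a generic joint-space impedance controller of the kind such motor-level loops typically implement makes each joint behave like a spring-damper about its commanded trajectory. In this sketch the gains `k`, `d` and the feedforward term are illustrative placeholders, not R2's actual parameters:

```python
def impedance_torque(q, qd, q_des, qd_des, k, d, tau_ff=0.0):
    """Generic joint-space impedance law: commanded torque pulls the
    joint toward the desired position/velocity like a spring-damper,
    so contact with a human yields compliant, bounded forces.
    Illustrative only; R2's Multi-Loop law is not specified here.
      q, qd        : measured joint position and velocity
      q_des, qd_des: desired joint position and velocity
      k, d         : stiffness and damping gains
      tau_ff       : optional model-based feedforward torque
    """
    return k * (q_des - q) + d * (qd_des - qd) + tau_ff
```

    Lowering `k` makes the joint softer (safer around people) at the cost of tracking precision, which is exactly the trade-off the two stated requirements pull against.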

  7. Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it.

    PubMed

    Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A

    2014-05-01

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, as in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

  8. Information driven self-organization of complex robotic behaviors.

    PubMed

    Martius, Georg; Der, Ralf; Ay, Nihat

    2013-01-01

    Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality, which prevents learning systems from scaling well.
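    For a discrete sensor stream, the one-step predictive information is the mutual information between successive values, which can be estimated from pair counts. This is a toy sketch of the PI objective itself (not the authors' TiPI estimator or its controller-parameter update rules):

```python
import numpy as np
from collections import Counter

def predictive_information(seq):
    """One-step predictive information I(past; future) in bits,
    estimated from empirical pair frequencies of a discrete sequence.
    Toy estimator for intuition; not the paper's TiPI machinery."""
    pairs = Counter(zip(seq[:-1], seq[1:]))
    n = sum(pairs.values())
    p_past = Counter(seq[:-1])
    p_future = Counter(seq[1:])
    pi = 0.0
    for (a, b), count in pairs.items():
        p_ab = count / n
        pi += p_ab * np.log2(p_ab / ((p_past[a] / n) * (p_future[b] / n)))
    return pi
```

    A perfectly predictable alternating stream like `[0, 1, 0, 1, ...]` carries about 1 bit of predictive information per step, while a constant stream carries none; PI-driven self-organization rewards behavior that is structured yet varied.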

  9. Artificial heart for humanoid robot using coiled SMA actuators

    NASA Astrophysics Data System (ADS)

    Potnuru, Akshay; Tadesse, Yonas

    2015-03-01

    Previously, we presented the design and characterization of an artificial heart using cylindrical shape memory alloy (SMA) actuators for humanoids [1]. The robotic heart was primarily designed to pump a blood-like fluid to parts of the robot such as the face to simulate blushing or anger, using elastomeric substrates for the transport of fluids. It can also be used for other applications. In this paper, we present an improved design that uses high-strain coiled SMAs and a novel pumping mechanism based on sequential actuation to create peristalsis-like motions and hence pump the fluid. Various placements of actuators are investigated with respect to the silicone elastomeric body. This new approach provides better performance in terms of the fluid volume pumped.

  10. Actuator and electronics packaging for extrinsic humanoid hand

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Diftler, Myron A. (Inventor); Reich, David M. (Inventor); Askew, Scott R. (Inventor)

    2013-01-01

    The lower arm assembly for a humanoid robot includes an arm support having a first side and a second side, a plurality of wrist actuators mounted to the first side of the arm support, a plurality of finger actuators mounted to the second side of the arm support and a plurality of electronics also located on the first side of the arm support.

  11. Rotary Series Elastic Actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2013-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.
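    The defining property of a series elastic actuator follows from this arrangement: because the torsion spring sits between the gear-drive output and the joint output, joint torque can be read directly from the spring's angular deflection. A one-line sketch (the spring constant is an illustrative placeholder; actual values are not given in the abstract):

```python
def sea_torque(theta_gear_out_rad, theta_joint_rad, k_spring_nm_per_rad):
    """Series elastic actuator torque sensing: the torsion spring between
    gear output and joint output deflects under load, so joint torque is
    spring stiffness times that deflection (Hooke's law for a torsion
    spring). Stiffness value is an illustrative placeholder."""
    return k_spring_nm_per_rad * (theta_gear_out_rad - theta_joint_rad)
```

    For example, with a hypothetical 300 N·m/rad spring, a 0.01 rad deflection implies about 3 N·m at the joint; softer springs give finer torque resolution at the cost of bandwidth.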

  12. Rotary series elastic actuator

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Mehling, Joshua S. (Inventor); Parsons, Adam H. (Inventor); Griffith, Bryan Kristian (Inventor); Radford, Nicolaus A. (Inventor); Permenter, Frank Noble (Inventor); Davis, Donald R. (Inventor); Ambrose, Robert O. (Inventor); Junkin, Lucien Q. (Inventor)

    2012-01-01

    A rotary actuator assembly is provided for actuation of an upper arm assembly for a dexterous humanoid robot. The upper arm assembly for the humanoid robot includes a plurality of arm support frames each defining an axis. A plurality of rotary actuator assemblies are each mounted to one of the plurality of arm support frames about the respective axes. Each rotary actuator assembly includes a motor mounted about the respective axis, a gear drive rotatably connected to the motor, and a torsion spring. The torsion spring has a spring input that is rotatably connected to an output of the gear drive and a spring output that is connected to an output for the joint.

  13. Role Transfer for Robot Tasking

    DTIC Science & Technology

    2002-04-01

    Artificial Intelligence (AAAI-98). Brooks, R. A., Breazeal, C., Marjanovic, M. & Scassellati, B. (1999). The Cog project: Building a humanoid robot...scale investment in knowledge infrastructure, Communications of the ACM 38(11): 33-38. Marjanovic, M. (1995). Learning functional maps between

  14. Design and development an insect-inspired humanoid gripper that is structurally sound, yet very flexible

    NASA Astrophysics Data System (ADS)

    Hajjaj, S.; Pun, N.

    2013-06-01

    One of the biggest challenges in mechanical robotics design is the balance between structural integrity and flexibility. An industrial robotic gripper may be technically advanced yet contain only one degree of freedom (DOF), and adding more DOFs quickly makes the design complex. The human wrist and fingers, by contrast, contain 23 DOFs and are lightweight and highly flexible. Robots are becoming more and more a part of our social life, increasingly incorporated into social, medical, and personal applications. For such robots to be effective, they need to mimic humans both in performance and in mechanical design. In this work, a humanoid gripper is designed and built to mimic a simplified version of the human wrist and fingers, drawing on both insect and human gripper designs. The main challenge was to ensure that the gripper is structurally sound while remaining flexible and lightweight. A combination of lightweight materials and a unique finger-actuator design was applied. The gripper is controlled by a PARALLAX servo controller 28823 (PSCI), which is mounted on the assembly itself. The result is a 6-DOF humanoid gripper made of lightweight material, similar in size to the human arm, and able to carry a weight of 1 kg.

  15. HET2 Overview

    NASA Technical Reports Server (NTRS)

    Fong, Terrence W.; Bualat, Maria Gabriele; Diftler, Myron A.

    2015-01-01

    2015 mid-year review charts of the Human Exploration Telerobotics 2 project that describe the Astrobee free-flying robot and the Robonaut 2 humanoid robot. Astrobee is a planned replacement for the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) currently in use on the International Space Station (ISS).

  16. Challenges in Building Robots that Imitate People

    DTIC Science & Technology

    2000-01-01

    pages 25-40, 1998. R. Brooks, C. Breazeal (Ferrell), R. Irie, C. Kemp, M. Marjanovic, B. Scassellati, & M. Williamson. Alternative essences of...Breazeal (Ferrell), M. Marjanovic, B. Scassellati, and M. Williamson. The Cog project: building a humanoid robot. In C. Nehaniv, editor, Computation for

  17. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    ERIC Educational Resources Information Center

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  18. Two-Armed, Mobile, Sensate Research Robot

    NASA Technical Reports Server (NTRS)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially-driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach the floor, high shelves, or even some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and contains a force-and-torque sensor that provides feedback for force (compliance) control of the arm. The end effector could be a tool or a robot hand, depending on the application.

  19. Investigating the ability to read others' intentions using humanoid robots.

    PubMed

    Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio

    2015-01-01

    The ability to interact with other people hinges crucially on the possibility to anticipate how their actions will unfold. Recent evidence suggests that a similar skill may be grounded in the fact that we perform an action differently depending on the intention behind it. Human observers can detect these differences and use them to predict the intention underlying the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods, by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots indeed can establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.

  20. Tendon Driven Finger Actuation System

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Reich, David M. (Inventor); Bridgwater, Lyndon (Inventor); Linn, Douglas Martin (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor); Hargrave, Brian (Inventor); Valvo, Michael C. (Inventor); Abdallah, Muhammad E. (Inventor); hide

    2013-01-01

    A humanoid robot includes a robotic hand having at least one finger. An actuation system for the robotic finger includes an actuator assembly which is supported by the robot and is spaced apart from the finger. A tendon extends from the actuator assembly to the at least one finger and ends in a tendon terminator. The actuator assembly is operable to actuate the tendon to move the tendon terminator and, thus, the finger.

  1. Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

    PubMed

    Li, Mengfan; Li, Wei; Zhou, Huihui

    2016-02-01

    Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the meanings of the visual stimuli and help them more effectively concentrate on their mental activities.
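    The bits-per-minute figures quoted above are information transfer rates; the standard Wolpaw formula shows how accuracy and target count combine into such a rate. This is a generic sketch: the paradigm's actual target count and selection pace are not stated in the abstract, so the example values are illustrative.

```python
import math

def wolpaw_itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Standard Wolpaw information transfer rate for an N-choice BCI.
    Generic formula for intuition; the study's target count and pace
    are not given in the abstract."""
    p = accuracy
    bits_per_selection = math.log2(n_targets)
    if 0 < p < 1:
        bits_per_selection += (p * math.log2(p)
                               + (1 - p) * math.log2((1 - p) / (n_targets - 1)))
    elif p == 0:
        bits_per_selection = 0.0  # conservative: chance-or-worse yields no rate
    return bits_per_selection * selections_per_min
```

    At chance-level accuracy the rate collapses to zero, and it grows steeply as accuracy approaches 100%, which is why the reported ~10-point accuracy gain translates into a large ITR gain.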

  2. A pilot study for robot appearance preferences among high-functioning individuals with autism spectrum disorder: Implications for therapeutic use

    PubMed Central

    Warren, Zachary; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae; Wakita, Yujin; Ishiguro, Hiroshi; Mimura, Masaru; Minabe, Yoshio; Kikuchi, Mitsuru

    2017-01-01

    Recent rapid technological advances have enabled robots to fulfill a variety of human-like functions, leading researchers to propose the use of such technology for the development and subsequent validation of interventions for individuals with autism spectrum disorder (ASD). Although a variety of robots have been proposed as possible therapeutic tools, the physical appearances of humanoid robots currently used in therapy with these patients are highly varied. Very little is known about how these varied designs are experienced by individuals with ASD. In this study, we systematically evaluated preferences regarding robot appearance in a group of 16 individuals with ASD (ages 10–17). Our data suggest that there may be important differences in preference for different types of robots that vary according to interaction type for individuals with ASD. Specifically, within our pilot sample, children with higher levels of reported ASD symptomatology preferred certain humanoid robots over those perceived as more mechanical or mascot-like. The findings of this pilot study suggest that preferences and reactions to robotic interactions may vary tremendously across individuals with ASD. Future work should evaluate how such differences may be systematically measured and potentially harnessed to facilitate meaningful interactive and intervention paradigms. PMID:29028837

  3. An artificial nociceptor based on a diffusive memristor.

    PubMed

    Yoon, Jung Ho; Wang, Zhongrui; Kim, Kyung Min; Wu, Huaqiang; Ravichandran, Vignesh; Xia, Qiangfei; Hwang, Cheol Seong; Yang, J Joshua

    2018-01-29

    A nociceptor is a critical and special receptor of a sensory neuron that is able to detect noxious stimuli and provide a rapid warning to the central nervous system to initiate a motor response, in the human body as in humanoid robotics. It differs from other common sensory receptors in its key features and functions, including the "no adaptation" and "sensitization" phenomena. In this study, we propose and experimentally demonstrate, for the first time, an artificial nociceptor based on a diffusive memristor with critical dynamics. Using this artificial nociceptor, we further built an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices into artificial intelligence systems, such as humanoid robots.
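
    The two signature behaviors named in the abstract, "no adaptation" and "sensitization", can be captured by a very small phenomenological model. A toy sketch (the thresholds, injury level, and recovery constant are illustrative, not the memristor's actual physics):

```python
def nociceptor_responses(stimuli, threshold=1.0, injury_level=3.0,
                         sensitized_threshold=0.5, recovery=0.01):
    """Toy nociceptor exhibiting 'no adaptation' and 'sensitization'."""
    out = []
    for s in stimuli:
        if s >= injury_level:
            # sensitization: a strong insult lowers the firing threshold
            threshold = sensitized_threshold
        # no adaptation: a sustained supra-threshold stimulus keeps firing
        out.append(max(0.0, s - threshold))
        # the lowered threshold recovers slowly toward its baseline
        threshold = min(1.0, threshold + recovery)
    return out
```

    A sustained noxious input thus elicits an undiminished response on every step, while a previously sub-threshold stimulus begins to elicit a response once an "injury" has occurred.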

  4. Humanoid Robot

    NASA Technical Reports Server (NTRS)

    Linn, Douglas M. (Inventor); Mehling, Joshua S. (Inventor); Radford, Nicolaus A. (Inventor); Bridgwater, Lyndon (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Davis, Donald R. (Inventor); Diftler, Myron A. (Inventor); Platt, Robert (Inventor)

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, two hands, a neck, and a head. The torso extends along a primary axis and presents a pair of shoulders. The pair of arms movably extend from a respective one of the pair of shoulders. Each of the arms has a plurality of arm joints. The neck movably extends from the torso along the primary axis. The neck has at least one neck joint. The head movably extends from the neck along the primary axis. The head has at least one head joint. The shoulders are canted toward one another at a shrug angle that is defined between each of the shoulders such that a workspace is defined between the shoulders.

  5. A Robotic Therapy Case Study: Developing Joint Attention Skills with a Student on the Autism Spectrum

    ERIC Educational Resources Information Center

    Charron, Nancy; Lewis, Lundy; Craig, Michael

    2017-01-01

    The purpose of this article is to describe a possible methodology for developing joint attention skills in students with autism spectrum disorder. Co-robot therapy with the humanoid robot NAO was used to foster a student's joint attention skill development; 20-min sessions conducted once weekly during the school year were video recorded and…

  6. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    ERIC Educational Resources Information Center

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  7. Development of haptic based piezoresistive artificial fingertip: Toward efficient tactile sensing systems for humanoids.

    PubMed

    TermehYousefi, Amin; Azhari, Saman; Khajeh, Amin; Hamidon, Mohd Nizar; Tanaka, Hirofumi

    2017-08-01

    Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to the development of tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible devices. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) of the tomato was performed to obtain its stress distribution and failure point under different external loads. The extracted computational information was then used to design and fabricate the nanocomposite-based artificial fingertip for tomato maturity analysis. The results demonstrate that the fabricated conformable and scalable artificial fingertip shows different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, producing a periodic response under an applied load. Copyright © 2017. Published by Elsevier B.V.

  8. Motor contagion during human-human and human-robot interaction.

    PubMed

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how a partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object-directed and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interaction partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.

  9. Motor Contagion during Human-Human and Human-Robot Interaction

    PubMed Central

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of “mutual understanding” that is the basis of social interaction. However, it remains unclear how a partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object-directed and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were performed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interaction partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990

  10. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system of a highly intelligent humanoid localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.
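
    At the core of such binaural localization is estimating the interaural time difference between the two microphones. A minimal sketch of the classical cross-correlation approach (SIG's actual pipeline also integrates vision and motor-noise cancellation; the microphone spacing and far-field model below are assumptions):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.2        # m, assumed distance between the two microphones

def estimate_azimuth(left, right, fs):
    """Estimate source azimuth (degrees, positive toward the right mic)
    from the lag of the cross-correlation peak between the two channels."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # >0: right channel leads
    delay = lag / fs                          # interaural time difference, s
    # far-field model: delay = (spacing / c) * sin(azimuth)
    s = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

    Turning the head to place the source at zero azimuth, as SIG does, drives this lag toward zero, where the sine model is most sensitive.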

  11. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner.

    PubMed

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.

  12. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner

    PubMed Central

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967

  13. Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses.

    PubMed

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot.

  14. Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses

    PubMed Central

    Mergner, Thomas; Lippi, Vittorio

    2018-01-01

    Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428

  15. Development of a novel humanoid-robot simulator for endoscope with pharyngeal reflex and real-life responses.

    PubMed

    Ueki, Masaru; Uehara, Kazutake; Isomoto, Hajime

    2018-05-15

    In recent years, there has been a growing need for skill quantification of endoscopic specialists. Various educational simulators have been created to help increase the endoscopy performance of medical students and trainees. Recent research suggests that the use of simulators helps increase the skill level of endoscopists while improving patient safety [1, 2]. However, previous simulators lack sufficient realism and are unable to replicate natural human reactions during endoscopy or quantify endoscopic skills. We developed a novel humanoid-robot simulator (named mikoto®) with pharyngeal reflexes and real-life responses to endoscopy. This article is protected by copyright. All rights reserved.

  16. Adaptive training of cortical feature maps for a robot sensorimotor controller.

    PubMed

    Adams, Samantha V; Wennekers, Thomas; Denham, Sue; Culverhouse, Phil F

    2013-08-01

    This work investigates self-organising cortical feature maps (SOFMs) based upon the Kohonen Self-Organising Map (SOM) but implemented with spiking neural networks. In future work, the feature maps are intended as the basis for a sensorimotor controller for an autonomous humanoid robot. Traditional SOM methods require some modifications to be useful for autonomous robotic applications. Ideally the map training process should be self-regulating and not require predefined training files or the usual SOM parameter reduction schedules. It would also be desirable if the organised map had some flexibility to accommodate new information whilst preserving previous learnt patterns. Here methods are described which have been used to develop a cortical motor map training system which goes some way towards addressing these issues. The work is presented under the general term 'Adaptive Plasticity' and the main contribution is the development of a 'plasticity resource' (PR) which is modelled as a global parameter which expresses the rate of map development and is related directly to learning on the afferent (input) connections. The PR is used to control map training in place of a traditional learning rate parameter. In conjunction with the PR, random generation of inputs from a set of exemplar patterns is used rather than predefined datasets and enables maps to be trained without deciding in advance how much data is required. An added benefit of the PR is that, unlike a traditional learning rate, it can increase as well as decrease in response to the demands of the input and so allows the map to accommodate new information when the inputs are changed during training. Copyright © 2013 Elsevier Ltd. All rights reserved.
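
    The "plasticity resource" idea, a global learning signal that follows afferent error rather than a predefined decay schedule, can be sketched in a few lines. This is a simplified rate-based reading of the concept (the paper uses spiking networks; the update rule and all constants here are illustrative assumptions):

```python
import numpy as np

def train_som_with_pr(sample_input, n_units=25, dim=2, steps=3000, seed=0):
    """Toy 1-D SOM whose learning rate is a global 'plasticity resource'
    (PR) tracking recent afferent (input) error: poorly represented
    inputs replenish PR, good representation lets it decay, so training
    self-regulates instead of following a fixed parameter schedule."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, dim))       # afferent weights
    pr = 1.0
    idx = np.arange(n_units)
    for _ in range(steps):
        x = sample_input(rng)            # inputs drawn on demand, no dataset
        d = np.linalg.norm(w - x, axis=1)
        bmu = int(np.argmin(d))          # best-matching unit
        pr = 0.95 * pr + 0.05 * d[bmu]   # PR follows the afferent error
        lr = min(pr, 1.0)
        # Gaussian neighbourhood whose width also shrinks with PR
        sigma = max(lr * n_units, 1.0)
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w
```

    Because PR can rise again when the input statistics change, the map retains some capacity to accommodate new information after initial organisation, which a monotonically decaying learning rate cannot.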

  17. Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger

    PubMed Central

    Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.

    2013-01-01

    In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, the performance of finger reaching movements to visual targets involving the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear, and also often includes mechanical coupling of the two last joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing if this cortical model is able to learn the inverse kinematics for an actual anthropomorphic humanoid finger having its two last joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569
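
    For comparison with the learned cortical mapping, the same coupled-finger kinematics can be inverted numerically. A sketch of damped least-squares inverse kinematics for a planar three-phalanx finger whose distal joint is mechanically coupled to the middle one (the link lengths, coupling ratio, and solver constants are assumptions, not the paper's values):

```python
import numpy as np

L = np.array([0.05, 0.03, 0.02])   # assumed phalanx lengths (m)
COUPLING = 2.0 / 3.0               # distal joint coupled: q3 = COUPLING * q2

def fingertip(q1, q2):
    """Planar forward kinematics with the last two joints coupled."""
    angles = np.cumsum([q1, q2, COUPLING * q2])
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def solve_ik(target, q0=(0.3, 0.3), iters=100, damping=0.02,
             max_step=0.2, eps=1e-6):
    """Damped least-squares IK with a step-size limit for robustness."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fingertip(*q)
        J = np.zeros((2, 2))           # numerical Jacobian of the fingertip
        for j in range(2):
            dq = np.zeros(2)
            dq[j] = eps
            J[:, j] = (fingertip(*(q + dq)) - fingertip(*q)) / eps
        step = np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ err)
        n = np.linalg.norm(step)
        if n > max_step:
            step *= max_step / n       # trust-region-style clipping
        q += step
    return q
```

    The joint coupling is handled simply by folding it into the forward kinematics, so only the two independent joint angles are solved for.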

  18. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck is embodied as a parallel manipulator with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3-degree-of-freedom (DOF) motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects the non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
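
    The face-tracking component can be sketched as a standard constant-velocity Kalman filter over the detected face centre in pixel coordinates. The paper does not give its state model, so the one below, including all noise levels, is illustrative:

```python
import numpy as np

def make_cv_kalman(dt=1.0 / 30.0, q=50.0, r=4.0):
    """Constant-velocity Kalman filter over a face centre (u, v) in pixels.
    State is [u, v, du, dv]; q and r are assumed process/measurement
    noise levels, not values from the paper."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt             # position integrates velocity
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0            # only position is measured
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.zeros(4)
    P = np.eye(4) * 500.0              # large initial uncertainty

    def step(z):
        nonlocal x, P
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x[:2]                   # filtered face centre

    return step
```

    Feeding each frame's (possibly noisy) face detection to `step` yields a smoothed gaze target for the neck and eye controllers.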

  19. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
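
    The classical particle filter underlying this method can be sketched in one predict/update/resample cycle. The example below is a 1-D toy with a range measurement to a known landmark; the paper's actual contribution, a cheaper particle-selection strategy, is replaced here by plain multinomial resampling, and all noise parameters are assumptions:

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         landmark, motion_std=0.05, meas_std=0.1, rng=None):
    """One predict/update/resample cycle for 1-D robot localization
    from the sensed distance to a known landmark (field marker)."""
    rng = rng or np.random.default_rng()
    # predict: apply noisy odometry to every particle
    particles = particles + control + rng.normal(0.0, motion_std, particles.size)
    # update: reweight by the Gaussian likelihood of the range measurement
    expected = np.abs(landmark - particles)
    w = weights * np.exp(-0.5 * ((measurement - expected) / meas_std) ** 2)
    w /= w.sum()
    # resample: draw a new, equally weighted particle set
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)
```

    The resampling step is exactly where selection strategies differ: on a CPU-limited platform such as the Nao, drawing and evaluating fewer, better-chosen particles is what reduces the per-iteration cost.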

  20. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems is move-to-grasp problems, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  1. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems is move-to-grasp problems, where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
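
    A refining sequence of controllers can be made concrete as a small scheduler: each controller runs until its own convergence predicate fires, then hands the state to the next, finer controller. A schematic sketch of the idea (not Robonaut's actual control stack; the controllers and predicates below are hypothetical):

```python
def run_refining_sequence(controllers, state, max_steps=1000):
    """Execute a coarse-to-fine sequence of controllers.

    controllers: list of (step_fn, converged_fn) pairs. Each step_fn maps
    state -> state; control passes to the next controller once
    converged_fn(state) holds, so each stage shrinks the variance left
    by the previous one."""
    stage = 0
    for _ in range(max_steps):
        if stage == len(controllers):
            break                      # final controller has converged
        step_fn, converged_fn = controllers[stage]
        state = step_fn(state)
        if converged_fn(state):
            stage += 1                 # refine: hand off to the next stage
    return state, stage
```

    In a 1-D toy, a coarse stage that takes fixed 0.5-unit steps toward a goal at 10 can hand off, once within 1 unit, to a fine stage that closes 90% of the remaining error per step.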

  2. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

    This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  3. Designing a Robot for Cultural Brokering in Education

    ERIC Educational Resources Information Center

    Kim, Yanghee

    2016-01-01

    The increasing number of English language learning children in U.S. classrooms and the need for effective programs that support these children present a great challenge to the current educational paradigm. The challenge may be met, at least in part, by an innovative humanoid robot serving as a cultural broker that mediates collaborative…

  4. KSC-2012-4343

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – At the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the Morpheus prototype lander begins to lift off of the ground during a free-flight test. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

  5. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Kim, Youngmoo E.

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712

  6. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
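The decision layer of such a BCI can be pictured as a mapping from the winning motor-imagery class to a navigation command. A minimal sketch follows; the class labels, the pairing of each class with a command, and the argmax decision rule are illustrative assumptions, not details reported by the study.

```python
# Hypothetical decoder stage of a four-class motor-imagery BCI.
# The pairing of imagery classes to commands is an assumed example.
COMMAND_MAP = {
    "left_hand": "turn_left",
    "right_hand": "turn_right",
    "left_foot": "move_forward",
    "right_foot": "move_backward",
}

def decode_command(class_scores):
    """Pick the command mapped to the highest-scoring imagery class."""
    best_class = max(class_scores, key=class_scores.get)
    return COMMAND_MAP[best_class]

# Example: classifier output favoring right-hand imagery
scores = {"left_hand": 0.1, "right_hand": 0.6, "left_foot": 0.2, "right_foot": 0.1}
print(decode_command(scores))  # turn_right
```

In a real system the scores would come from a classifier trained on fNIRS hemodynamic features; here they are hard-coded to keep the sketch self-contained.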

  7. Effect of feedback from a socially interactive humanoid robot on reaching kinematics in children with and without cerebral palsy: A pilot study.

    PubMed

    Chen, Yuping; Garcia-Vergara, Sergio; Howard, Ayanna M

    2017-08-17

To examine whether children with or without cerebral palsy (CP) would follow a humanoid robot's (i.e., Darwin's) feedback to move their arm faster when playing virtual reality (VR) games. Seven children with mild CP and 10 able-bodied children participated. Real-time reaching was evaluated by playing the Super Pop VR™ system, comprising a 2-game baseline, a 3-game acquisition, and a 2-game extinction phase. During acquisition, Darwin provided verbal feedback to direct the child to reach a kinematically defined target goal (i.e., 80% of average movement time in baseline). Outcome variables included the percentage of successful reaches ("% successful reaches"), movement time (MT), average speed, path, and number of movement units. All games during acquisition and extinction had larger "% successful reaches," faster speeds, and shorter MTs than the 2 games during baseline (p < .05). Children with and without CP could follow the robot's feedback for changing their reaching kinematics when playing VR games.

  8. How Developmental Psychology and Robotics Complement Each Other

    DTIC Science & Technology

    2000-01-01

Breazeal, Marjanovic, Scassellati & Williamson), and a system for regulating interaction intensities (Breazeal & Scassellati) have also been implemented… have been previously reported (Scassellati; Scassellati; Brooks, Breazeal, Marjanovic, Scassellati & Williamson; Marjanovic et al.; Brooks, (Ferrell)… Marjanovic, M., Scassellati, B. & Williamson, M. M. (1999), 'The Cog Project: Building a Humanoid Robot', in C. L. Nehaniv, ed., 'Computation for

  9. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach that corrects several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
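The inference step of such a fuzzy correction filter can be sketched as a zero-order Takagi-Sugeno system: membership functions weight a set of rules, and the corrected angle is the weighted average of the rule consequents. All centers, widths, and consequent values below are made-up stand-ins for parameters that a Neuro-Fuzzy training step would fit from experimental data; this is not the TEO filter itself.

```python
import math

# Illustrative rules: (membership center deg, membership width deg,
# corrected-angle consequent deg). Values are invented for the sketch.
RULES = [
    (-30.0, 15.0, -27.0),
    (0.0, 15.0, 0.0),
    (30.0, 15.0, 27.0),
]

def gaussian(x, c, s):
    """Gaussian membership degree of x for a rule centered at c."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def correct_angle(measured_deg):
    """One Takagi-Sugeno inference step: weighted average of consequents."""
    weights = [gaussian(measured_deg, c, s) for c, s, _ in RULES]
    consequents = [q for _, _, q in RULES]
    return sum(w * q for w, q in zip(weights, consequents)) / sum(weights)

print(round(correct_angle(0.0), 3))  # 0.0
```

Collapsing all distortion corrections into one such inference pass is what lets the filter run in a single processing step.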

  10. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach that corrects several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  11. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

Front-matter excerpt (contents and acronyms): 2.2.1 Linear Inverted Pendulum Model; 2.2.2 Planar Five-link Model; LVDT: Linear Variable Differential Transformers; MEMS: Microelectromechanical Systems; MHE: Moving Horizon Estimator; QP…

  12. iss031e148737

    NASA Image and Video Library

    2012-06-27

    ISS031-E-148737 (27 June 2012) --- European Space Agency astronaut Andre Kuipers, Expedition 31 flight engineer, poses for a photo with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  13. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
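The decoupling device named in the abstract, projection into the null space of a task, has a compact linear-algebra form: with primary task Jacobian J1, the projector P1 = I - J1⁺J1 maps any secondary joint motion into motions invisible to the primary task. The sketch below illustrates only this projection property; the 2-task, 5-joint dimensions and the random Jacobian are arbitrary assumptions, not the paper's robot model.

```python
import numpy as np

rng = np.random.default_rng(0)
J1 = rng.standard_normal((2, 5))   # primary task Jacobian (e.g. keep plate level)
J1_pinv = np.linalg.pinv(J1)       # Moore-Penrose pseudoinverse
P1 = np.eye(5) - J1_pinv @ J1      # projector onto the null space of J1

dq_secondary = rng.standard_normal(5)   # joint velocity asked by a secondary task
dq = P1 @ dq_secondary                  # decoupled contribution

# The projected motion produces no velocity in the primary task space:
print(np.allclose(J1 @ dq, 0.0))  # True
```

It is this invariance that lets an observed motion be decomposed task by task: whatever survives the projection cannot have come from the primary controller.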

  14. Pleasant to the Touch: By Emulating Nature, Scientists Hope to Find Innovative New Uses for Soft Robotics in Health-Care Technology.

    PubMed

    Cianchetti, Matteo; Laschi, Cecilia

    2016-01-01

    Open your Internet browser and search for videos showing the most advanced humanoid robots. Look at how they move and walk. Observe their motion and their interaction with the environment (the ground, users, target objects). Now, search for a video of your favorite sports player. Despite the undoubtedly great achievements of modern robotics, it will become quite evident that a lot of work still remains.

  15. A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict

    ERIC Educational Resources Information Center

    Mazzoni, Elvis; Benvenuti, Martina

    2015-01-01

    This paper presents an exploratory study in which a humanoid robot (MecWilly) acted as a partner to preschool children, helping them to learn English words. In order to use the Socio-Cognitive Conflict paradigm to induce the knowledge acquisition process, we designed a playful activity in which children worked in pairs with another child or with…

  16. Slow walking model for children with multiple disabilities via an application of humanoid robot

    NASA Astrophysics Data System (ADS)

    Wang, ZeFeng; Peyrodie, Laurent; Cao, Hua; Agnani, Olivier; Watelain, Eric; Wang, HaoPing

    2016-02-01

Walk training research with children having multiple disabilities is presented. Orthosis-aided walking for children with multiple disabilities such as cerebral palsy continues to be a clinical and technological challenge. In order to reduce pain and improve treatment strategies, an intermediate structure, the humanoid robot NAO, is proposed as an assay platform for studying walk training models to be transferred to future special exoskeletons for children. A suitable and stable walking model is proposed for walk training and is simulated and tested on NAO. A comparative study of zero moment point (ZMP) trajectories, support polygons, and energy consumption validates the model as more stable than NAO's conventional gait. According to the direction of variation of the center of mass and the slopes of the linear regressions of knee/ankle angles, the Slow Walk model faithfully emulates the gait pattern of children.
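The stability comparison above rests on the standard ZMP criterion: under the linear-inverted-pendulum approximation, zmp = com - (z_com / g) * com_accel, and the gait is considered stable while the ZMP stays inside the support polygon. A one-dimensional sketch, with purely illustrative numbers:

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(com_x, com_z, com_accel_x):
    """ZMP along x under the linear-inverted-pendulum approximation."""
    return com_x - (com_z / G) * com_accel_x

def inside_support(zmp, x_min, x_max):
    """True if the ZMP lies within the (1-D) support polygon."""
    return x_min <= zmp <= x_max

# Illustrative state: CoM 2 cm forward, 30 cm high, accelerating at 0.5 m/s^2
z = zmp_x(com_x=0.02, com_z=0.30, com_accel_x=0.5)
print(round(z, 4), inside_support(z, -0.05, 0.10))  # 0.0047 True
```

A slower gait keeps CoM accelerations small, which keeps the ZMP near the CoM projection and well inside the support polygon; that is the intuition behind the Slow Walk model's larger stability margin.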

  17. Dexterous Humanoid Robotic Wrist

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Bridgwater, Lyndon (Inventor); Reich, David M. (Inventor); Wampler, II, Charles W. (Inventor); Askew, Scott R. (Inventor); Diftler, Myron A. (Inventor); Nguyen, Vienny (Inventor)

    2013-01-01

    A humanoid robot includes a torso, a pair of arms, a neck, a head, a wrist joint assembly, and a control system. The arms and the neck movably extend from the torso. Each of the arms includes a lower arm and a hand that is rotatable relative to the lower arm. The wrist joint assembly is operatively defined between the lower arm and the hand. The wrist joint assembly includes a yaw axis and a pitch axis. The pitch axis is disposed in a spaced relationship to the yaw axis such that the axes are generally perpendicular. The pitch axis extends between the yaw axis and the lower arm. The hand is rotatable relative to the lower arm about each of the yaw axis and the pitch axis. The control system is configured for determining a yaw angle and a pitch angle of the wrist joint assembly.
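For a two-axis wrist like the one described, yaw and pitch can be recovered from a desired hand pointing direction expressed in the lower-arm frame. The sketch below is a hypothetical illustration of that decomposition; the frame convention (x forward, y left, z up) and the decomposition order are assumptions, not the patent's actual control law.

```python
import math

def wrist_angles(direction):
    """Yaw/pitch decomposition of a pointing direction (x, y, z)."""
    x, y, z = direction
    yaw = math.atan2(y, x)                   # rotation about the yaw axis
    pitch = math.atan2(z, math.hypot(x, y))  # rotation about the pitch axis
    return yaw, pitch

# Example: pointing 45 degrees to the left in the horizontal plane
yaw, pitch = wrist_angles((1.0, 1.0, 0.0))
print(round(math.degrees(yaw), 1), round(math.degrees(pitch), 1))  # 45.0 0.0
```

Because the two axes are perpendicular and spaced as described, each axis contributes one of the two angles independently, which is what makes this closed-form recovery possible.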

  18. Emotion attribution to a non-humanoid robot in different social situations.

    PubMed

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. To interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs adapted to the human environment and learned to participate in complex social interactions. In this observational study we propose to design emotionally expressive robot behaviour using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear") and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize whether the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

  19. Emotion Attribution to a Non-Humanoid Robot in Different Social Situations

    PubMed Central

    Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám

    2014-01-01

In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. To interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs adapted to the human environment and learned to participate in complex social interactions. In this observational study we propose to design emotionally expressive robot behaviour using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”) and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize whether the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot. PMID:25551218

  20. Pettit enters data in a laptop computer

    NASA Image and Video Library

    2012-03-13

    ISS030-E-142862 (13 March 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, enters data in a computer while working with Robonaut 2 humanoid robot in the Destiny laboratory of the International Space Station.

  1. Investigating Models of Social Development Using a Humanoid Robot

    DTIC Science & Technology

    1998-01-01

robot interaction and cooperation (Takanishi, Hirano & Sato 1998; Morita, Shibuya 1996)… and neural models of spinal motor neurons (Williamson)… etiology and behavioral manifestations of pervasive developmental disorders such as autism… Individuals with autism tend to have normal sensory… grasp the implications of this information. While interested in joint attention both as an explanation… the deficits of autism certainly cover many other

  2. Robonaut 2 - The First Humanoid Robot in Space

    NASA Technical Reports Server (NTRS)

Diftler, M. A.; Radford, N. A.; Mehling, J. S.; Abdallah, M. E.; Bridgwater, L. B.; Sanders, A. M.; Askew, R. S.; Linn, D. M.; Yamokoski, J. D.; Permenter, F. A.; et al.

    2010-01-01

NASA and General Motors have developed the second-generation Robonaut, Robonaut 2 or R2, and it is scheduled to arrive on the International Space Station in late 2010 and undergo initial testing in early 2011. This state-of-the-art, dexterous, anthropomorphic robotic torso has significant technical improvements over its predecessor, making it a far more valuable tool for astronauts. Upgrades include increased force sensing, greater range of motion, higher bandwidth, and improved dexterity. R2's integrated mechatronics design results in a more compact and robust distributed control system with a fraction of the wiring of the original Robonaut. Modularity is prevalent throughout the hardware and software, along with innovative and layered approaches for sensing and control. The most important aspects of the Robonaut philosophy are clearly present in this latest model's ability to allow comfortable human interaction and in its design to perform significant work using the same hardware and interfaces used by people. The following describes the mechanisms, integrated electronics, control strategies, and user interface that make R2 a promising addition to the Space Station and other environments where humanoid robots can assist people.

  3. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia

    PubMed Central

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a “control software” for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention, nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient-therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction with the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed high constructive engagement in both CT and RAT sessions. More positive emotional responses were observed in participants in RAT than in CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention, increasing their immediate wellbeing and satisfaction. PMID:28713296

  4. Affective and Engagement Issues in the Conception and Assessment of a Robot-Assisted Psychomotor Therapy for Persons with Dementia.

    PubMed

    Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel

    2017-01-01

The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a "control software" for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention, nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient-therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction with the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed high constructive engagement in both CT and RAT sessions. More positive emotional responses were observed in participants in RAT than in CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention, increasing their immediate wellbeing and satisfaction.

  5. Multi-function robots with speech interaction and emotion feedback

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Lou, Guanting; Ma, Mengchao

    2018-03-01

Nowadays, service robots have been deployed in many public settings; however, most of them still lack the capability of speech interaction, especially speech-emotion interaction feedback. To make the robot more humanoid, an Arduino microcontroller was used in this study for the speech recognition module and the servo motor control module, to achieve the functions of speech interaction and emotion feedback. In addition, a W5100 chip was adopted for network connection to achieve information transmission via the Internet, providing broad application prospects for the robot in the area of the Internet of Things (IoT).

  6. Biomimetic shoulder complex based on 3-PSS/S spherical parallel mechanism

    NASA Astrophysics Data System (ADS)

    Hou, Yulei; Hu, Xinzhe; Zeng, Daxing; Zhou, Yulin

    2015-01-01

The application of parallel mechanisms is still limited in the humanoid robot field, and existing parallel humanoid robot joints do not yet fully reflect the characteristics of parallel mechanisms, nor do they effectively solve problems such as a small workspace. From the structural and functional bionic point of view, a three-degrees-of-freedom (DOF) spherical parallel mechanism for the shoulder complex of a humanoid robot is presented. Based on an analysis of the structure and kinematic characteristics of the human shoulder complex, the 3-PSS/S configuration (P for prismatic pair, S for spherical pair) is chosen as the original configuration for the shoulder complex. Using a genetic algorithm, the 3-PSS/S spherical parallel mechanism is optimized, and the orientation workspace of the prototype mechanism is enlarged markedly. Combining the practical structural characteristics of the human shoulder complex, an offset output mode is put forward, in which the output rod of the mechanism can turn to any direction at a point a certain distance from the rotation center of the mechanism; this makes it possible for the workspace of the mechanism to match the actual motion space of the human shoulder joint. The relationship between the attitude angles in different coordinate systems is derived, establishing the foundation for motion descriptions under different conditions and for control development. In summary, the 3-PSS/S spherical parallel mechanism is proposed for the shoulder complex, and consistency between the workspace of the mechanism and that of the human shoulder complex is achieved through structural parameter optimization and the offset output design.

  7. Concurrent Path Planning with One or More Humanoid Robots

    NASA Technical Reports Server (NTRS)

    Reiland, Matthew J. (Inventor); Sanders, Adam M. (Inventor)

    2014-01-01

    A robotic system includes a controller and one or more robots each having a plurality of robotic joints. Each of the robotic joints is independently controllable to thereby execute a cooperative work task having at least one task execution fork, leading to multiple independent subtasks. The controller coordinates motion of the robot(s) during execution of the cooperative work task. The controller groups the robotic joints into task-specific robotic subsystems, and synchronizes motion of different subsystems during execution of the various subtasks of the cooperative work task. A method for executing the cooperative work task using the robotic system includes automatically grouping the robotic joints into task-specific subsystems, and assigning subtasks of the cooperative work task to the subsystems upon reaching a task execution fork. The method further includes coordinating execution of the subtasks after reaching the task execution fork.
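The coordination pattern described, partitioning joints into task-specific subsystems and assigning one subtask to each at a task-execution fork, can be sketched as a small data-flow step. The joint names and subtasks below are invented examples; this is a toy illustration of the grouping idea, not the patented controller.

```python
def fork(joints, subtask_groups):
    """Split a cooperative task into (subtask, subsystem) pairs at a fork.

    joints: ordered list of all joint names in the robotic system.
    subtask_groups: mapping of subtask name -> set of joint names assigned to it.
    """
    assignments = []
    for subtask, joint_names in subtask_groups.items():
        # Each subsystem is the subset of joints serving this subtask,
        # kept in the robot's canonical joint order.
        subsystem = [j for j in joints if j in joint_names]
        assignments.append((subtask, subsystem))
    return assignments

joints = ["l_shoulder", "l_elbow", "r_shoulder", "r_elbow"]
groups = {
    "hold_panel": {"l_shoulder", "l_elbow"},
    "fasten_bolt": {"r_shoulder", "r_elbow"},
}
for subtask, subsystem in fork(joints, groups):
    print(subtask, subsystem)
```

After the fork, each subsystem can be commanded independently while the controller synchronizes their progress, which is the essence of the claimed method.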

  8. Balance Maintenance in High-Speed Motion of Humanoid Robot Arm-Based on the 6D Constraints of Momentum Change Rate

    PubMed Central

    Zhang, Da-song; Chu, Jian

    2014-01-01

Based on the 6D constraints of momentum change rate (CMCR), this paper puts forward a real-time, full balance maintenance method for a humanoid robot during high-speed movement of its 7-DOF arm. First, the total momentum formula for the robot's two arms is given, and the momentum change rate is defined as the time derivative of the total momentum. The authors also illustrate the idea of full balance maintenance and analyze the physical meaning of the 6D CMCR and its fundamental relation to full balance maintenance. Moreover, a discretization and optimization solution of the CMCR is provided, subject to the motion constraints of the auxiliary arm's joints, and the solving algorithm is optimized. The simulation results show the validity and generality of the proposed method for full balance maintenance in the 6 DOFs of the robot body under the 6D CMCR. The method ensures 6D dynamic balance performance and yields an ample ZMP stability margin. The resulting motion of the auxiliary arm has large redundancy in joint space, and the angular velocities and angular accelerations of its joints lie within the predefined limits. The proposed algorithm also has good real-time performance. PMID:24883404
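The paper's basic quantity, the momentum change rate as the time derivative of total momentum, can be sketched with a finite difference. The 1-D link masses, velocities, and time step below are illustrative assumptions; the actual method works with the full 6D (linear and angular) momentum of both arms.

```python
def total_momentum(masses, velocities):
    """Total linear momentum of a set of links (1-D for brevity)."""
    return sum(m * v for m, v in zip(masses, velocities))

def momentum_change_rate(masses, v_prev, v_curr, dt):
    """Finite-difference approximation of d(momentum)/dt over one step."""
    return (total_momentum(masses, v_curr) - total_momentum(masses, v_prev)) / dt

masses = [2.0, 1.5]  # illustrative link masses, kg
rate = momentum_change_rate(masses, v_prev=[0.0, 0.0], v_curr=[1.0, 2.0], dt=0.5)
print(rate)  # 10.0
```

Constraining this rate in all six dimensions is what bounds the reaction forces on the body and keeps the ZMP inside the support region during fast arm motion.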

  9. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics.
This work furthers our understanding of both the uncanny valley, as well as the visual factors that contribute to an agent's uncanniness. PMID:28912736

  10. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley"-a phenomenon in which highly humanlike entities provoke aversion in human observers-has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N agents = 60) to conduct an experimental test (N participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding-suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics.
This work furthers our understanding of both the uncanny valley, as well as the visual factors that contribute to an agent's uncanniness.

  11. Acquisition of Basic Behaviors through Teleoperation using Robonaut

    NASA Technical Reports Server (NTRS)

    Campbell, Christina

    2004-01-01

    My area of research is in artificial intelligence and robotics. The major platform of this research is NASA's Robonaut. This humanoid robot is located at the Johnson Space Center. Prior to receiving this grant, I was able to spend two summers in Houston working with the Robonaut team, which is headed by Rob Ambrose. My work centered on teaching Robonaut to grasp a wrench based on data gathered as a human teleoperated the robot. I tried to make the procedure as general as possible so that many different motions could be taught using this method.

  12. Achieving Collaborative Interaction with a Humanoid Robot

    DTIC Science & Technology

    2003-01-01

    gestures will become more prevalent in the kinds of interactions we study. Gesturing is a natural part of human-human communication. It...to human communication. However, in human-to-human experiments, Tversky et al. observed a similar result and found that speakers took the

  13. Autonomy in Materials Research: A Case Study in Carbon Nanotube Growth (Postprint)

    DTIC Science & Technology

    2016-10-21

    built an Autonomous Research System (ARES)—an autonomous research robot capable of first-of-its-kind closed-loop iterative materials experimentation...ARES exploits advances in autonomous robotics, artificial intelligence, data sciences, and high-throughput and in situ techniques, and is able to...roles of humans and autonomous research robots, and for human-machine partnering. We believe autonomous research robots like ARES constitute a

  14. Developmental Approach for Behavior Learning Using Primitive Motion Skills.

    PubMed

    Dawood, Farhan; Loo, Chu Kiong

    2018-05-01

    Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could first be achieved through imitation, yet the discussion of the origin of primitive imitative abilities is often neglected, referring instead to the possibility of their innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities through sensorimotor associative learning induced by self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw camera images, without requiring any kinematic model; incremental learning of spatio-temporal motion sequences that dynamically generates a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and generation of complex behavior by combining the segmented motion primitives. In our experiment, the self-posture is acquired by observing the image of the robot's own body while performing actions in front of a mirror through body babbling. The complete architecture was evaluated in simulation and in real-robot experiments performed on the DARwIn-OP humanoid robot.
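
    The first step listed above, segmenting an observed motion stream into primitives, can be illustrated with a toy Python sketch that cuts a stream of inter-frame image differences wherever motion pauses. The 1-D difference signal and threshold are assumptions made for the example; the paper's method works on raw camera images.

```python
# Hedged sketch of motion-primitive segmentation: split a sequence of
# inter-frame difference magnitudes into primitives, cutting at frames
# where motion energy drops below a threshold (a pause). Illustrative only.

def segment_primitives(frame_diffs, threshold=0.05):
    """Return lists of frame indices, one list per motion primitive."""
    segments, current = [], []
    for i, d in enumerate(frame_diffs):
        if d < threshold:
            if current:           # a pause closes the current primitive
                segments.append(current)
                current = []
        else:
            current.append(i)     # frame belongs to the ongoing primitive
    if current:
        segments.append(current)
    return segments

diffs = [0.0, 0.3, 0.4, 0.2, 0.01, 0.02, 0.5, 0.6, 0.0]
primitives = segment_primitives(diffs)
```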

  15. Reducing children's pain and distress towards flu vaccinations: a novel and effective application of humanoid robotics.

    PubMed

    Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan

    2013-06-07

    Millions of children in North America receive an annual flu vaccination, many of whom are at risk of experiencing severe distress. Millions of children also use technologically advanced devices such as computers and cell phones. Based on this familiarity, we introduced another sophisticated device - a humanoid robot - to interact with children during their vaccination. We hypothesized that these children would experience less pain and distress than children who did not have this interaction. This was a randomized controlled study in which 57 children (30 male; age, mean±SD: 6.87±1.34 years) were randomly assigned to a vaccination session with a nurse who used standard administration procedures, or with a robot that was programmed to use cognitive-behavioral strategies with them while a nurse administered the vaccination. Measures of pain and distress were completed by children, parents, nurses, and researchers. Multivariate analyses of variance indicated that interaction with a robot during flu vaccination resulted in significantly less pain and distress in children according to parent, child, nurse, and researcher ratings, with effect sizes in the moderate to high range (Cohen's d=0.49-0.90). This is the first study to examine the effectiveness of child-robot interaction for reducing children's pain and distress during a medical procedure. All measures of reduction were significant. These findings suggest that further research on robotics at the bedside is warranted to determine how robots can effectively help children manage painful medical procedures. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
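
    The effect sizes reported above are Cohen's d values for two independent groups. For readers unfamiliar with the statistic, a small Python sketch with invented sample data (not the study's measurements):

```python
import math

# Cohen's d with pooled standard deviation for two independent groups.
# The sample values below are invented for illustration.

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

d = cohens_d([4.0, 5.0, 6.0], [2.0, 3.0, 4.0])
```

    By the usual convention, d around 0.5 is a moderate effect and 0.8 or above a large one, which is how the abstract's range of 0.49-0.90 should be read.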

  16. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study

    PubMed Central

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna

    2018-01-01

    Background Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. Objective The aim of this study was to explore participants’ qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. Methods NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot’s head sensor. Evaluations were content-analyzed utilizing Boyatzis’ steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Results Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. 
Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Conclusions Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. PMID:29724701

  17. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  18. The use of new technologies for nutritional education in primary schools: a pilot study.

    PubMed

    Rosi, A; Dall'Asta, M; Brighenti, F; Del Rio, D; Volta, E; Baroni, I; Nalin, M; Coti Zelati, M; Sanna, A; Scazzina, F

    2016-11-01

    The aim of this study was to evaluate whether the presence of a humanoid robot could improve the efficacy of a game-based nutritional education intervention. This was a controlled, school-based pilot intervention carried out on fourth-grade school children (8-10 years old). A total of 112 children underwent a game-based nutritional educational lesson on the importance of carbohydrates. For one group (n = 58), the lesson was carried out by a nutritional educator, the Master of Taste (MT), whereas for another group (n = 54), the Master of Taste was supported by a humanoid robot (MT + NAO). A third group of children (n = 33) served as a control and did not receive any lesson. The intervention efficacy was evaluated by questionnaires administered at the beginning and at the end of each intervention. The nutritional knowledge level was evaluated by the cultural-nutritional awareness factor (AF) score. A total of 290 questionnaires were analyzed. Both MT and MT + NAO interventions significantly increased nutritional knowledge. At the end of the study, children in the MT and MT + NAO groups showed similar AF scores, and the AF scores of both intervention groups were significantly higher than the AF score of the control group. This study showed a significant increase in the nutritional knowledge of children involved in a game-based, single-lesson, educational intervention performed by a figure with a background in food science. However, the presence of a humanoid robot to support this figure's teaching activity did not result in any significant learning improvement. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. Comparison of Human and Humanoid Robot Control of Upright Stance

    PubMed Central

    Peterka, Robert J.

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to ~1 Hz) dynamic characteristics of human stance control. These subsystems are 1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions, and 2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design.
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different. PMID:19665564

  20. Comparison of human and humanoid robot control of upright stance.

    PubMed

    Peterka, Robert J

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1 Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design.
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
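
    The "sensory integration" mechanism described in this abstract can be caricatured as a weighted combination of body-orientation estimates from several sensory channels, with re-weighting when one channel becomes unreliable. A toy Python sketch; the channel names, weights, and renormalization rule are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative sketch of sensory integration with re-weighting.
# Estimates are body-lean angles (degrees) from different channels.

def integrate_orientation(estimates, weights):
    """Weighted sum of per-channel orientation estimates."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[ch] * estimates[ch] for ch in estimates)

def reweight(weights, unreliable, floor=0.1):
    """Down-weight an unreliable channel and renormalize the rest."""
    w = dict(weights)
    w[unreliable] = floor
    total = sum(w.values())
    return {ch: v / total for ch, v in w.items()}

est = {"vestibular": 2.0, "proprioceptive": 1.0, "visual": 6.0}
w = {"vestibular": 0.3, "proprioceptive": 0.4, "visual": 0.3}
theta = integrate_orientation(est, w)   # nominal combined estimate
w2 = reweight(w, "visual")              # e.g. when vision is sway-referenced
theta2 = integrate_orientation(est, w2)
```

    After down-weighting the discrepant visual channel, the combined estimate shifts toward the vestibular and proprioceptive readings, which is the qualitative behavior the abstract argues robot controllers currently lack.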

  1. Keep focussing: striatal dopamine multiple functions resolved in a single mechanism tested in a simulated humanoid robot

    PubMed Central

    Fiore, Vincenzo G.; Sperati, Valerio; Mannella, Francesco; Mirolli, Marco; Gurney, Kevin; Friston, Karl; Dolan, Raymond J.; Baldassarre, Gianluca

    2014-01-01

    The effects of striatal dopamine (DA) on behavior have been widely investigated over the past decades, with “phasic” burst firings considered as the key expression of a reward prediction error responsible for reinforcement learning. Less well studied is “tonic” DA, where putative functions include the idea that it is a regulator of vigor, incentive salience, disposition to exert an effort and a modulator of approach strategies. We present a model combining tonic and phasic DA to show how different outflows triggered by either intrinsically or extrinsically motivating stimuli dynamically affect the basal ganglia by impacting on a selection process this system performs on its cortical input. The model, which has been tested on the simulated humanoid robot iCub interacting with a mechatronic board, shows the putative functions ascribed to DA emerging from the combination of a standard computational mechanism coupled to a differential sensitivity to the presence of DA across the striatum. PMID:24600422

  2. Improving Cognitive Skills of the Industrial Robot

    NASA Astrophysics Data System (ADS)

    Bezák, Pavol

    2015-08-01

    At present, there are plenty of industrial robots that are programmed to do the same repetitive task all the time. Industrial robots doing this kind of job are not able to judge whether an action is correct, effective, or good. Object detection, manipulation, and grasping are challenging due to uncertainties in modeling the hand and object, unknown contact types, and object stiffness properties. In this paper, a model for intelligent humanoid-hand object detection and grasping is proposed, assuming that the object properties are known. The control is simulated in Matlab Simulink/SimMechanics with the Neural Network Toolbox and Computer Vision System Toolbox.

  3. Small-Group Technology-Assisted Instruction: Virtual Teacher and Robot Peer for Individuals with Autism Spectrum Disorder.

    PubMed

    Saadatzi, Mohammad Nasser; Pennington, Robert C; Welch, Karla C; Graham, James H

    2018-06-20

    The authors combined virtual reality technology and social robotics to develop a tutoring system that resembled a small-group arrangement. This tutoring system featured a virtual teacher instructing sight words, and included a humanoid robot emulating a peer. The authors used a multiple-probe design across word sets to evaluate the effects of the instructional package on the explicit acquisition and vicarious learning of sight words instructed to three children with autism spectrum disorder (ASD) and the robot peer. Results indicated that participants acquired, maintained, and generalized 100% of the words explicitly instructed to them, made fewer errors while learning the words common between them and the robot peer, and vicariously learned 94% of the words solely instructed to the robot.

  4. Cooperative Autonomous Robots for Reconnaissance

    DTIC Science & Technology

    2009-03-06

    Collaborating mobile robots equipped with WiFi transceivers are configured as a mobile ad-hoc network. Algorithms are developed to take advantage of the distributed processing

  5. FE Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013708 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  6. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013710 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  7. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013714 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  8. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013712 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  9. KSC-2010-4382

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a robotics engineer animates the dexterous humanoid astronaut helper, Robonaut (R2) for the participants at a media event hosted by NASA. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  10. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031125 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  11. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031124 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  12. Robonaut 2 in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-02

    ISS034-E-013990 (2 Jan. 2013) --- In the International Space Station’s Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  13. Generalisation, decision making, and embodiment effects in mental rotation: A neurorobotic architecture tested with a humanoid robot.

    PubMed

    Seepanomwan, Kristsana; Caligiore, Daniele; Cangelosi, Angelo; Baldassarre, Gianluca

    2015-12-01

    Mental rotation, a classic experimental paradigm of cognitive psychology, tests the capacity of humans to mentally rotate a seen object to decide if it matches a target object. In recent years, mental rotation has been investigated with brain imaging techniques to identify the brain areas involved. Mental rotation has also been investigated through the development of neural-network models, used to identify the specific mechanisms that underlie its process, and with neurorobotics models to investigate its embodied nature. Current models, however, have limited capacities to relate to neuroscientific evidence, to generalise mental rotation to new objects, to suitably represent decision making mechanisms, and to allow the study of the effects of overt gestures on mental rotation. The work presented in this study overcomes these limitations by proposing a novel neurorobotic model that has a macro-architecture constrained by knowledge of the brain, encompasses a rather general mental rotation mechanism, and incorporates a biologically plausible decision making mechanism. The model was tested using the humanoid robot iCub in tasks requiring the robot to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot gained an enhanced capacity to generalise mental rotation to new objects and to express the possible effects of overt movements of the wrist on mental rotation. The model also represents a further step in the identification of the embodied neural mechanisms that may underlie mental rotation in humans and might also give hints to enhance robots' planning capabilities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
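
    Stripped of its neural machinery, the mental-rotation task itself reduces to rotating a seen shape and testing whether it coincides with a target, which a short Python sketch can make concrete. The shapes, angular resolution, and tolerance below are illustrative assumptions, not the paper's neurorobotic model.

```python
import math

# Geometric core of a mental-rotation trial: try rotations of the seen
# shape and report a match if any aligns with the target. A mirror image
# of an asymmetric shape never matches, reproducing the classic "different"
# response. Shapes and parameters are invented for illustration.

def rotate(points, angle):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def matches(seen, target, steps=360, tol=1e-6):
    """True if some sampled rotation of `seen` coincides with `target`."""
    for k in range(steps):
        r = rotate(seen, 2 * math.pi * k / steps)
        if all(math.hypot(rx - tx, ry - ty) < tol
               for (rx, ry), (tx, ty) in zip(r, target)):
            return True
    return False

L_shape = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]   # an asymmetric shape
rotated = rotate(L_shape, math.pi / 2)            # target shown at 90 degrees
mirrored = [(-x, y) for x, y in L_shape]          # mirror image of the shape
```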

  14. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study.

    PubMed

    Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna; Andrade, Jackie

    2018-05-03

    Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. 
Some participants felt that the intervention increased their physical activity levels. Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. ©Joana Galvão Gomes da Silva, David J Kavanagh, Tony Belpaeme, Lloyd Taylor, Konna Beeson, Jackie Andrade. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.05.2018.

  15. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot.

    PubMed

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating hypothetical human-derived control systems in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to three interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control.
Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes including frequency response functions (FRFs), as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling, in the robot, more complex sensorimotor functions in the future, such as walking.
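The DEC principle of supporting a commanded pose by cancelling estimated disturbances such as gravity can be illustrated with a minimal single-link sketch. This is not the authors' implementation; the parameters, gains, and servo law below are invented for illustration:

```python
import math

def simulate(compensate, ref=0.3, steps=500, dt=0.01):
    """Single inverted-pendulum link driven by a PD servo; optionally add
    a DEC-style feedforward term cancelling the estimated gravity torque."""
    J, mgh, Kp, Kd = 1.0, 5.0, 20.0, 8.0   # invented illustrative parameters
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        gravity = mgh * math.sin(theta)        # self-produced disturbance
        tau = Kp * (ref - theta) - Kd * omega  # commanded-pose servo
        if compensate:
            tau -= gravity                     # disturbance compensation
        omega += (tau + gravity) / J * dt      # Euler integration
        theta += omega * dt
    return theta

with_dec = simulate(True)      # settles at the commanded 0.3 rad pose
without_dec = simulate(False)  # gravity drags the link off the pose
```

With compensation the link settles at the commanded 0.3 rad; without it, the uncancelled gravity torque leaves a steady-state error of roughly 0.06 rad.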

  16. Human-Derived Disturbance Estimation and Compensation (DEC) Method Lends Itself to a Modular Sensorimotor Control in a Humanoid Robot

    PubMed Central

    Lippi, Vittorio; Mergner, Thomas

    2017-01-01

    The high complexity of the human posture and movement control system represents challenges for diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with the high complexity of the human posture control system. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating hypothetical human-derived control systems in humanoid robots. This system is then compared with the human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to three interconnected DEC control modules were used in modular control architectures separately for the sagittal plane or the frontal body plane and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control.
Experiments were performed in the 14 DoF robot Lucy Posturob (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes including frequency response functions (FRFs), as they are used in human postural control analysis, and (iii) witnessing postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling, in the robot, more complex sensorimotor functions in the future, such as walking. PMID:28951719

  17. Interactive language learning by robots: the transition from babbling to word forms.

    PubMed

    Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe

    2012-01-01

    The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. 
This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
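The frequency-dependent selection of salient word forms described above can be sketched in a few lines. The syllable stream, threshold, and function name here are invented for illustration; the robot's actual learner operates on perceived phoneme streams in real-time dialogue:

```python
from collections import Counter

def learn_word_forms(syllable_stream, min_count=3):
    """Return syllables heard often enough to be treated as word forms,
    mimicking a frequency-dependent learner: consistently pronounced
    (content-like) syllables recur, variable (function-like) ones do not."""
    counts = Counter(syllable_stream)
    return [s for s, n in counts.most_common() if n >= min_count]

# Invented toy stream: 'ball' and 'red' recur with stable pronunciation.
stream = ["ba", "wl", "ball", "da", "ball", "red", "ti", "red", "ball", "red"]
print(learn_word_forms(stream))  # → ['ball', 'red']
```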

  18. Design and control of five fingered under-actuated robotic hand

    NASA Astrophysics Data System (ADS)

    Sahoo, Biswojit; Parida, Pramod Kumar

    2018-04-01

    Nowadays, research on humanoid robots and their applications in different fields (industrial, household, rehabilitation, and exploratory) is going on across the globe. A challenging topic among these is the design of a dexterous robotic hand that not only can perform as the hand of a robot but also can be used in rehabilitation. The basic concern is a dexterous robot hand that is able to mimic the function of the biological hand in performing different operations. This thesis work concerns the design and control of an under-actuated robotic hand consisting of four under-actuated fingers (index, middle, ring, and little), a thumb, and a dexterous palm, which can copy the motions and grasp types of the human hand using 21 degrees of freedom instead of 25.

  19. Autonomous robot software development using simple software components

    NASA Astrophysics Data System (ADS)

    Burke, Thomas M.; Chung, Chan-Jin

    2004-10-01

    Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design, utilizing only basic image processing and a little algebra, has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
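One way to steer with "only basic image processing and a little algebra" is to aim at the centroid of thresholded lane pixels. This sketch is an illustration of that general idea, not the LTU-AISSIG design itself; the gain and frame are invented:

```python
def steering_from_lane_pixels(binary_image, gain=0.02):
    """binary_image: rows of 0/1 where 1 marks a detected lane pixel.
    Steer proportionally to the horizontal offset of the pixel centroid
    from the image centre (negative = steer left, positive = right)."""
    xs = [x for row in binary_image for x, v in enumerate(row) if v]
    if not xs:
        return 0.0                      # no lane found: go straight
    centroid = sum(xs) / len(xs)
    centre = (len(binary_image[0]) - 1) / 2
    return gain * (centroid - centre)

# Lane pixels bunched in column 6 of a 5x8 frame -> steer right (> 0).
frame = [[0] * 8 for _ in range(5)]
for r in range(5):
    frame[r][6] = 1
print(steering_from_lane_pixels(frame))  # 0.02 * (6 - 3.5) = 0.05
```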

  20. What makes a robot 'social'?

    PubMed

    Jones, Raya A

    2017-08-01

    Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.

  1. Long-Term Simultaneous Localization and Mapping in Dynamic Environments

    DTIC Science & Technology

    2015-01-01

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the ...and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory...

  2. [Mobile autonomous robots-Possibilities and limits].

    PubMed

    Maehle, E; Brockmann, W; Walthelm, A

    2002-02-01

    Besides industrial robots, which today are firmly established in production processes, service robots are becoming more and more important. They are intended to provide services for humans in different areas of their professional and everyday environment, including medicine. Most of these service robots are mobile, which requires intelligent autonomous behaviour. After characterising the different kinds of robots, the relevant paradigms of intelligent autonomous behaviour for mobile robots are critically discussed in this paper and illustrated by three concrete examples of robots realized in Lübeck. In addition, a short survey of current surgical robots as well as an outlook on future developments is given.

  3. Utilization of the NASA Robonaut as a Surgical Avatar in Telemedicine

    NASA Technical Reports Server (NTRS)

    Dean, Marc; Diftler, Myron

    2015-01-01

    The concept of teleoperated robotic surgery is not new; however, most of the work to date has utilized specialized robots designed for a specific set of surgeries. This activity explores the use of a humanoid robot to perform surgical procedures using the same hand-held instruments that a human surgeon employs. For this effort, the tele-operated Robonaut (R2) was selected due to its dexterity, its ability to perform a wide range of tasks, and its adaptability to changing environments. To evaluate this concept, a series of challenges was designed with the goal of assessing the feasibility of utilizing Robonaut as a telemedicine-based surgical avatar.

  4. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by its multiple-timescale dynamics property (a multiple timescale recurrent neural network, MTRNN) was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature perception level by using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated in ongoing task sequences without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic property developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

    PubMed Central

    Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan

    2012-01-01

    Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence motor interference (MI). Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853

  6. Empowering Student Voice through Interactive Design and Digital Making

    ERIC Educational Resources Information Center

    Kim, Yanghee; Searle, Kristin

    2017-01-01

    Over the last two decades online technology and digital media have provided space for students to participate and express their voices. This paper further explores how new digital technologies, such as humanoid robots and wearable electronics, can be used to offer additional spaces where students' voices are heard. In these spaces, young students…

  7. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model. This may provide multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
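The multiple-kernel ingredient rests on the fact that a non-negatively weighted sum of valid kernel Gram matrices is again a valid kernel, so per-feature kernels can be fused before SVM training. A sketch of that fusion step, with invented toy features, widths, and weights (not the paper's MFMK-SVM code):

```python
import math

def gaussian_kernel(xs, ys, gamma):
    """Gram matrix of an RBF kernel over 1-D feature lists."""
    return [[math.exp(-gamma * (x - y) ** 2) for y in ys] for x in xs]

def combine_kernels(kernels, weights):
    """Weighted sum of Gram matrices; with non-negative weights the
    result is again a positive semi-definite kernel usable by an SVM."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for K, w in zip(kernels, weights))
             for j in range(n)] for i in range(n)]

intensity = [0.1, 0.5, 0.9]   # toy per-pixel intensity feature
gradient = [0.0, 1.0, 0.2]    # toy gradient feature
K = combine_kernels(
    [gaussian_kernel(intensity, intensity, 2.0),
     gaussian_kernel(gradient, gradient, 0.5)],
    [0.7, 0.3])
```

The combined matrix `K` would then be passed to any SVM solver that accepts a precomputed kernel.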

  8. Control of autonomous robot using neural networks

    NASA Astrophysics Data System (ADS)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.
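The vigilance-test idea behind ART1-style filtering of a training set can be sketched as follows. This is a simplified illustration with invented binary sensor vectors and an invented vigilance value, not the article's implementation:

```python
def art1_filter(patterns, vigilance=0.8):
    """Cluster binary sensor vectors into prototypes. A pattern joins the
    first prototype whose overlap ratio passes the vigilance test and is
    merged by elementwise AND; otherwise it founds a new prototype. This
    deduplicates a training set down to its distinct situations."""
    prototypes = []
    for p in patterns:
        for i, proto in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(p, proto))
            if overlap / max(1, sum(p)) >= vigilance:
                prototypes[i] = [a & b for a, b in zip(p, proto)]
                break
        else:
            prototypes.append(list(p))
    return prototypes

# Four invented binary sensor snapshots; the last repeats the first.
sensors = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [1, 1, 0, 0]]
print(len(art1_filter(sensors)))  # → 3
```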

  9. Interacting With Robots to Investigate the Bases of Social Interaction.

    PubMed

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and the anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to their actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction. 
We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.

  10. Development and Deployment of Robonaut 2 to the International Space Station

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.

    2011-01-01

    The development of the Robonaut 2 (R2) system was a joint endeavor with NASA and General Motors, producing robots strong enough to do work, yet safe enough to be trusted to work near humans. To date two R2 units have been produced, designated as R2A and R2B. This follows more than a decade of work on the Robonaut 1 units that produced advances in dexterity, tele-presence, remote supervision across time delay, combining mobility with manipulation, human-robot interaction, force control and autonomous grasping. Design challenges for the R2 included higher speed, smaller packaging, more dexterous fingers, more sensitive perception, soft drivetrain design, and the overall implementation of a system software approach for human safety. At the time of this writing the R2B unit was poised for launch to the International Space Station (ISS) aboard STS-133. R2 will be the first humanoid robot in space, and is arguably the most sophisticated robot in the world, bringing NASA into the 21st century as the world's leader in this field. Joining the other robots already on ISS, the station is now an exciting lab for robot experiments and utilization. A particular challenge for this project has been the design and certification of the robot and its software for work near humans. The 3-layer software system will be described, and the path to ISS certification will be reviewed. R2 will go through a series of ISS checkout tests during 2011. A taskboard was shipped with the robot that will be used to compare R2B's dexterous manipulation in zero gravity with the ground robot's ability to handle similar objects in Earth's gravity. R2's taskboard has panels with increasingly difficult tasks, starting with switches, progressing to connectors and eventually handling softgoods. The taskboard is modular, and new interfaces and experiments will be built up using equipment already on ISS. 
Since the objective is to test R2 performing tasks with human interfaces, hardware abounds on ISS and the crew will be involved to help select tasks that are dull, dirty or dangerous. Future plans for R2 include a series of upgrades, evolving from static IVA (Intravehicular Activity) operations, to mobile IVA, then EVA (Extravehicular Activity).

  11. DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS

    EPA Science Inventory

    This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...

  12. Measuring empathy for human and robot hand pain using electroencephalography.

    PubMed

    Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru

    2015-11-03

    This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations, such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although the ANOVA interaction was only marginal and not significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to human others. However, the beginning of the top-down process of empathy is weaker for robots than for humans.
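The P3 comparison rests on averaging event-locked amplitudes inside a post-stimulus time window. A minimal sketch of that step, with an invented sampling rate, window, and toy waveform (not the study's pipeline):

```python
def mean_amplitude(erp, srate_hz, t0_ms, t1_ms):
    """Average an event-locked ERP (list of microvolt samples, stimulus
    at sample 0) over the time window [t0_ms, t1_ms)."""
    i0 = round(t0_ms * srate_hz / 1000)
    i1 = round(t1_ms * srate_hz / 1000)
    window = erp[i0:i1]
    return sum(window) / len(window)

# Toy 10 Hz-sampled ERP with a 'P3-like' bump between 300 and 500 ms.
erp = [0, 0, 0, 2, 4, 2, 0, 0, 0, 0]
print(mean_amplitude(erp, 10, 300, 500))  # → 3.0
```

Condition differences (e.g., painful vs. non-painful) would then be tested on such window means across participants.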

  13. JOMAR: Joint Operations with Mobile Autonomous Robots

    DTIC Science & Technology

    2015-12-21

    AFRL-AFOSR-JP-TR-2015-0009. JOMAR: Joint Operations with Mobile Autonomous Robots. Edwin Olson, University of Michigan. Final report, 12/21/2015. Contract number FA23861114024. Abstract: Under this grant, we formulated and implemented a variety of novel algorithms that address core problems in multi-robot systems.

  14. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    PubMed Central

    Bowen, Chris; Ye, Gu; Alterovitz, Ron

    2015-01-01

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. 
We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
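The cost at the heart of the planner, the Mahalanobis distance between a trajectory's features and the demonstration distribution, reduces for a 2-D feature to the quadratic form below. The demonstration features are invented toy values (say, plate tilt and height), not data from the paper:

```python
def mahalanobis2(x, demos):
    """Mahalanobis distance of a 2-D feature vector x from the sample
    mean and covariance of the demonstration feature vectors."""
    n = len(demos)
    mu = [sum(d[i] for d in demos) / n for i in (0, 1)]
    c = [[sum((d[i] - mu[i]) * (d[j] - mu[j]) for d in demos) / (n - 1)
          for j in (0, 1)] for i in (0, 1)]          # sample covariance
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],          # 2x2 inverse
           [-c[1][0] / det, c[0][0] / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    d2 = sum(dx[i] * inv[i][j] * dx[j] for i in (0, 1) for j in (0, 1))
    return d2 ** 0.5

demos = [(0.0, 1.0), (0.2, 0.9), (-0.2, 1.1), (0.1, 1.0)]  # invented demos
close = mahalanobis2((0.0, 1.0), demos)  # near the demonstrations: low cost
far = mahalanobis2((1.0, 0.0), demos)    # violates learned constraint: high cost
```

A planner minimizing this quantity prefers trajectories whose features stay inside the spread of the demonstrations.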

  15. Application of ultrasonic sensor for measuring distances in robotics

    NASA Astrophysics Data System (ADS)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects that is an alternative to technical vision. Humanoid robots, like robots of other types, are first equipped with sensory systems similar to human senses. However, this approach alone is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals and creatures (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in nature. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is given.
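The HC-SR04 measures distance indirectly: it emits a 40 kHz burst and holds its echo pin high for the round-trip time of flight, so distance is the echo-pulse width times the speed of sound, halved. A host-side conversion sketch (pin handling omitted; 343 m/s assumes air at roughly 20 °C):

```python
def hcsr04_distance_m(echo_pulse_s, speed_of_sound=343.0):
    """Convert the width of the HC-SR04 echo pulse (seconds) to a
    one-way distance in metres: half the round-trip time of flight."""
    return echo_pulse_s * speed_of_sound / 2.0

# A 1 ms echo pulse corresponds to roughly 17 cm.
print(round(hcsr04_distance_m(0.001), 4))  # → 0.1715
```

On a microcontroller the pulse width would come from a timer capturing the echo pin's rising and falling edges.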

  16. We're in This Together: Intentional Design of Social Relationships with AIED Systems

    ERIC Educational Resources Information Center

    Walker, Erin; Ogan, Amy

    2016-01-01

    Students' relationships with their peers, teachers, and communities influence the ways in which they approach learning activities and the degree to which they benefit from them. Learning technologies, ranging from humanoid robots to text-based prompts on a computer screen, have a similar social influence on students. We envision a future in which…

  17. Pedagogical and Technological Augmentation of Mobile Learning for Young Children Interactive Learning Environments

    ERIC Educational Resources Information Center

    Kim, Yanghee; Smith, Diantha

    2017-01-01

    The ubiquity and educational potential of mobile applications are well acknowledged. This paper proposes six theory-based, pedagogical strategies to guide interaction design of mobile apps for young children. Also, to augment the capabilities of mobile devices, we used a humanoid robot integrated with a smartphone and developed an English-learning…

  18. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  19. Curiosity driven reinforcement learning for motion planning on humanoids

    PubMed Central

    Frank, Mikhail; Leitner, Jürgen; Stollenga, Marijn; Förster, Alexander; Schmidhuber, Jürgen

    2014-01-01

    Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience, controlling the actual iCub hardware in real-time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment. PMID:24432001
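A count-based novelty bonus is a common, simplified stand-in for the information-gain drive described above (the paper's agent maximizes information gain of a learned world model; the class, state, and action names here are invented):

```python
from collections import defaultdict

class CuriosityBonus:
    """Intrinsic reward inversely related to visit counts: rarely tried
    state-action pairs look 'interesting' to an exploring agent."""
    def __init__(self):
        self.counts = defaultdict(int)

    def reward(self, state, action):
        self.counts[(state, action)] += 1
        return 1.0 / self.counts[(state, action)] ** 0.5

bonus = CuriosityBonus()
first = bonus.reward("elbow_limit", "push")   # novel pair: maximal bonus 1.0
later = [bonus.reward("elbow_limit", "push") for _ in range(3)][-1]  # decays
```

As a pair is revisited the bonus decays (here to 0.5 after four visits), pushing the agent toward unexplored regions of the configuration space.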

  20. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    A judge for the NASA-WPI Sample Return Robot Centennial Challenge follows a robot on the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  1. Creating the brain and interacting with the brain: an integrated approach to understanding the brain.

    PubMed

    Morimoto, Jun; Kawato, Mitsuo

    2015-03-06

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  2. Creating the brain and interacting with the brain: an integrated approach to understanding the brain

    PubMed Central

    Morimoto, Jun; Kawato, Mitsuo

    2015-01-01

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568

  3. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims.

    PubMed

    Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie

    2014-12-01

    Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploring unknown disaster scenes. A direction-based exploration technique is integrated into the controller to expand the robot's search area via the classification of regions and the rubble piles within them. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes of different sizes and configurations.
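    The learning ingredient underneath an HRL controller like the one above is a standard temporal-difference update; the hierarchy adds sub-task decomposition on top. A minimal tabular Q-learning step (an illustrative sketch, not the paper's controller; states, actions, and the learning rate are assumptions) looks like:

    ```python
    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
        """One tabular Q-learning update: move Q(s, a) toward the reward
        plus the discounted value of the best next action."""
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        return Q
    ```

    Repeated over the robot's own experiences, updates like this are what let the controller improve its exploration policy without an explicit map of the disaster scene.
    
    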

  4. Mobile Robot Designed with Autonomous Navigation System

    NASA Astrophysics Data System (ADS)

    An, Feng; Chen, Qiang; Zha, Yanfang; Tao, Wenyin

    2017-10-01

    With the rapid development of robot technology, robots appear in ever more aspects of daily life and social production, and the requirements placed on them keep rising; one key requirement is autonomous navigation, i.e., the ability to recognize and follow a route. Take the common household sweeping robot as an example: it avoids obstacles, cleans the floor, and automatically finds its charging station. Another example is the AGV tracking car, which follows a route and reaches its destination successfully. This paper introduces a robot navigation scheme based on SLAM (simultaneous localization and mapping), which builds a map of a completely unfamiliar environment while simultaneously estimating the robot's own position within it, thereby achieving autonomous navigation.
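    One building block of the mapping half of SLAM is the log-odds occupancy-grid update, in which each range reading adds evidence that a cell is occupied or free. The sketch below is a generic textbook formulation, not this paper's system; the evidence weights `l_occ` and `l_free` are assumed values:

    ```python
    import math

    def logodds_update(l_prior, hit, l_occ=0.85, l_free=-0.4):
        """One occupancy-grid cell update in log-odds form: add positive
        evidence when a beam endpoint lands in the cell ('hit'), negative
        evidence when a beam passes through it ('miss')."""
        return l_prior + (l_occ if hit else l_free)

    def to_prob(l):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 / (1.0 + math.exp(-l))
    ```

    A cell starts at log-odds 0 (probability 0.5, unknown); a few hits push it toward occupied, a few misses toward free, and the full SLAM problem couples many such updates with the simultaneous estimate of the robot's pose.
    
    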

  5. Workshop on Critical ORI Issues Held in Bordeaux, France on October 27 - 29, 1992. Program and Abstracts.

    DTIC Science & Technology

    1992-10-29

    These people try to make their robotic vehicle as intelligent and autonomous as possible with the current state of technology. The robot only interacts... Robotics Peter J. Burt David Sarnoff Research Center Princeton, NJ 08543-5300 U.S.A. The ability of an operator to drive a remotely piloted vehicle depends...RESUPPLY - System which can rapidly and autonomously load and unload palletized ammunition. (18) AUTONOMOUS COMBAT EVACUATION VEHICLE - Robotic arms

  6. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    DTIC Science & Technology

    2017-06-01

    FOR ROBOT VISION IN AUTONOMOUS UNDERWATER VEHICLES USING THE COLOR SHIFT IN UNDERWATER IMAGING by Jake A. Jones June 2017 Thesis Advisor...June 2017 3. REPORT TYPE AND DATES COVERED Master’s thesis 4. TITLE AND SUBTITLE A NEW TECHNIQUE FOR ROBOT VISION IN AUTONOMOUS UNDERWATER...Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and

  7. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  8. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's eyes stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's consciousness is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  9. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    PubMed

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled, and legged robots. To address the problem of a snake robot localizing itself in application environments without orientation aids, an autonomous navigation method is proposed based on the motion-characteristic constraints of the snake robot. The method realizes autonomous navigation without external nodes or assistance, using only the robot's own Micro-Electromechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion-error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and applies zero-state constraints based on the motion features and control modes of the snake robot. Finally, it realizes autonomous navigation positioning with an Extended Kalman Filter (EKF) position estimation method under these motion-characteristic constraints. Tests with a self-developed snake robot verify the proposed method, with a position error of less than 5% of the total traveled distance. Over short distances, the method meets the requirements for a snake robot to navigate and localize autonomously in traditional applications, and it can be extended to other, similar multi-link robots.
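    The core of the estimation scheme above is the Kalman predict/update cycle; the paper's EKF linearizes nonlinear motion and measurement models, but the structure is easiest to see in the scalar linear case. The sketch below is illustrative only, with assumed noise parameters `q` and `r`:

    ```python
    def kf_step(x, P, u, z, q=0.01, r=0.1):
        """One scalar Kalman predict/update cycle. The EKF replaces the
        identity motion/measurement models here with Jacobians of the
        robot's nonlinear kinematics."""
        # Predict: state moves by commanded displacement u, uncertainty grows.
        x_pred = x + u
        P_pred = P + q
        # Update: fuse measurement z (e.g., a zero-velocity pseudo-measurement
        # derived from the gait's motion constraints).
        K = P_pred / (P_pred + r)          # Kalman gain
        x_new = x_pred + K * (z - x_pred)  # corrected state
        P_new = (1 - K) * P_pred           # reduced uncertainty
        return x_new, P_new
    ```

    The constraint-based pseudo-measurements are what keep the IMU-only dead-reckoning error bounded: each update pulls the estimate back toward what the gait physically allows.
    
    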

  10. Tele-assistance for semi-autonomous robots

    NASA Technical Reports Server (NTRS)

    Rogers, Erika; Murphy, Robin R.

    1994-01-01

    This paper describes a new approach to semi-autonomous mobile robots. In this approach the robot has sufficient computerized intelligence to function autonomously under a certain set of conditions, while the local system is a cooperative decision-making unit that combines human and machine intelligence, with communication taking place in a common mode and a common language. A number of exception-handling scenarios, constructed from experiments with actual sensor data collected from two mobile robots, are presented.

  11. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing such as fuzzy logic, evolutionary computation, and neural networks has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of a conventional controller, followed by a detailed description of the fuzzy logic controller.
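    A Mamdani controller maps crisp inputs through fuzzy membership functions, fires IF-THEN rules, and defuzzifies the result. The two-rule obstacle-avoidance sketch below is illustrative only (the membership shapes, distances, and steering angles are assumptions, and singleton outputs stand in for full output sets to keep defuzzification to one line):

    ```python
    def mu_near(d):
        """Membership in NEAR: full at 0 m, zero beyond 1 m."""
        return max(0.0, 1.0 - d)

    def mu_far(d):
        """Membership in FAR: zero below 0.5 m, full beyond 1.5 m."""
        return min(1.0, max(0.0, d - 0.5))

    def steer(dist):
        """Two rules: IF obstacle NEAR THEN turn HARD (45 deg);
        IF obstacle FAR THEN go STRAIGHT (0 deg). Defuzzify as the
        weighted centroid of the singleton outputs."""
        near, far = mu_near(dist), mu_far(dist)
        w = near + far
        return (near * 45.0 + far * 0.0) / w if w else 0.0
    ```

    At 0.75 m the obstacle is partly NEAR and partly FAR, so the controller blends the two rules into a moderate 22.5-degree turn — the smooth interpolation between rules is the practical appeal of fuzzy control for navigation.
    
    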

  12. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    University of Waterloo (Canada) Robotics Team members test their robot on the practice field one day prior to the NASA-WPI Sample Return Robot Centennial Challenge, Friday, June 15, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  13. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-14

    A University of Waterloo Robotics Team member tests their robot on the practice field two days prior to the NASA-WPI Sample Return Robot Centennial Challenge, Thursday, June 14, 2012 at the Worcester Polytechnic Institute in Worcester, Mass. Teams will compete for a $1.5 million NASA prize to build an autonomous robot that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  14. Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.

    PubMed

    Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W

    2007-10-01

    The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.
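    The "simple thresholding algorithms" mentioned above reduce, in essence, to picking out bright voxels and summarizing their location. A minimal stand-in (not the authors' segmentation pipeline; the volume layout and threshold are assumptions) thresholds a 3-D intensity volume and returns the centroid of supra-threshold voxels, e.g. to localize a bright needle tip:

    ```python
    def segment_centroid(volume, thresh):
        """Threshold a 3-D intensity volume (indexed volume[z][y][x]) and
        return the (x, y, z) centroid of supra-threshold voxels, or None
        if nothing exceeds the threshold."""
        pts = [(x, y, z)
               for z, plane in enumerate(volume)
               for y, row in enumerate(plane)
               for x, v in enumerate(row)
               if v >= thresh]
        if not pts:
            return None
        n = len(pts)
        return tuple(sum(c) / n for c in zip(*pts))
    ```

    A centroid in image coordinates still has to be mapped into the robot's frame — that is the role of the fiducial alignment mark on the transducer in the study above.
    
    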

  15. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012573 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  16. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012571 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  17. Six-and-a-Half-Month-Old Children Positively Attribute Goals to Human Action and to Humanoid-Robot Motion

    ERIC Educational Resources Information Center

    Kamewari, K.; Kato, M.; Kanda, T.; Ishiguro, H.; Hiraki, K.

    2005-01-01

    Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation…

  18. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems robot, foreground, and the University of Waterloo (Canada) robot, take to the practice field on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Robot teams will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  19. Robonaut: A Robotic Astronaut Assistant

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Diftler, Myron A.

    2001-01-01

    NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two, seven DOF arms, giving it an impressive work space for interacting with its environment. Its two, five fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.

  20. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems on a lower budget and with a flatter learning curve for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, integrated in online and offline modes with an Android-Arduino based rubbish-picking remote-control car. The combination of open source hardware and software created a flexible and expandable platform for future development in both software and hardware, in particular the combination of a graph database with artificial intelligence, as well as more sophisticated hardware such as legged or humanoid robots.

  1. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, left, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  2. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver, right, listens as Worcester Polytechnic Institute (WPI) Robotics Resource Center Director and NASA-WPI Sample Return Robot Centennial Challenge Judge Ken Stafford points out how the robots navigate the playing field during the challenge on Saturday, June 16, 2012 in Worcester, Mass. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  3. Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots

    DTIC Science & Technology

    2009-09-01

    To Appear in IEEE Robotics and Automation Magazine PREPRINT 1 Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling...Introduction We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain...object, ignore irrelevant distractions, and respond quickly and appropriately to the event [1]. There are separate neuromodulators that alter responses to

  4. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems robot "MXR - Mark's Exploration Robot" takes to the practice field and tries to capture the white object in the foreground on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Intrepid Systems' robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  5. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Children visiting the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event try to catch basketballs being thrown by a robot from FIRST Robotics at Burncoat High School (Mass.) on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  6. Spatial abstraction for autonomous robot navigation.

    PubMed

    Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon

    2015-09-01

    Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.

  7. Towards Principled Experimental Study of Autonomous Mobile Robots

    NASA Technical Reports Server (NTRS)

    Gat, Erann

    1995-01-01

    We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed; 2) the probability distributions of these performance metrics are exponential rather than normal; and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
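    The first result above — that time and distance metrics track each other less well than assumed — is the kind of claim checked with a correlation coefficient over repeated trials. A self-contained Pearson correlation (a generic sketch, not the paper's analysis code; the sample data in the usage note is invented):

    ```python
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length
        samples, e.g. time-to-goal vs. distance-traveled per trial."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    ```

    Perfectly proportional trials give a coefficient of 1.0; the paper's point is that real robot runs fall well short of that, so the two metrics are not interchangeable.
    
    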

  8. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    "Harry" a Goldendoodle is seen wearing a NASA backpack during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  9. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Team members of "Survey" drive their robot around the campus on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The Survey team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  10. Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji

    2012-01-01

    This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…

  11. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot simulator with an autonomous learning and growth function, characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual-environment simulation with the result of the interaction in the real world, and then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and integrates the interfaces between the user, the actual mobile robot, and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  12. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    PubMed

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  14. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Wunderkammer Laboratory Team leader Jim Rothrock, left, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "Cerberus" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Rothrock's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  15. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
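    The color-histogram half of a detector like LABRADOR's typically scores candidate image regions against a learned object histogram. A common similarity measure is normalized histogram intersection; the sketch below is a generic formulation, not iRobot's implementation, and the bin counts in the test are invented:

    ```python
    def hist_intersection(h1, h2):
        """Normalized histogram intersection between a candidate-region
        histogram h1 and a reference object histogram h2: 1.0 means the
        candidate fully contains the reference color distribution."""
        total = sum(h2)
        if not total:
            return 0.0
        return sum(min(a, b) for a, b in zip(h1, h2)) / total
    ```

    Combining a cheap color score like this with a shape-based track-learn-detect filter, as the record describes, lets each cue compensate for the other's failure modes (lighting shifts for color, deformation for shape).
    
    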

  16. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Intrepid Systems Team member Mark Curry, left, talks with NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck, right, about his robot named "MXR - Mark's Exploration Robot" on Saturday, June 16, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team was one of the final teams participating in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams were challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  17. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    Intrepid Systems Team member Mark Curry, right, answers questions from 8th grade Sullivan Middle School (Mass.) students about his robot named "MXR - Mark's Exploration Robot" on Friday, June 15, 2012, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Curry's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  18. Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.

    PubMed

    Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji

    2018-06-01

    Here, we present a study on exoskeleton robot control using a method developed to target hemiparesis. The control is based on inter-limb locomotor synergies, using kinesiological information from the non-paretic leg and a walking-aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method than with the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.

  19. Acquisition of Robotic Giant-swing Motion Using Reinforcement Learning and Its Consideration of Motion Forms

    NASA Astrophysics Data System (ADS)

    Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro

    This paper describes how a compact humanoid robot can acquire a giant-swing motion without any robot model by using the Q-Learning method. It is widely held that Q-Learning is not appropriate for learning dynamic motions, because the Markov property is not necessarily guaranteed during a dynamic task. We address this problem by embedding angular velocity into the state definition and by averaging the Q-Learning updates to reduce dynamic effects, although non-Markov effects remain in the learning results. The results show how the robot can acquire a giant-swing motion using the Q-Learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the results show how this method can avoid the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
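    The idea of folding angular velocity into the state can be sketched with minimal tabular Q-Learning over a discretized (angle bin, velocity bin) state space. This is only an illustration of the update rule; the state discretization, learning parameters, and reward are hypothetical, and the paper's averaging modification is not reproduced here:

```python
import random

# Tabular Q-learning sketch with angular velocity included in the state,
# so the discretized dynamics look closer to Markovian.
N_ANGLE, N_VEL, N_ACTIONS = 8, 5, 3   # hypothetical discretization
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2    # hypothetical learning parameters

Q = {}  # maps (angle_bin, vel_bin) -> list of action values

def q_values(state):
    return Q.setdefault(state, [0.0] * N_ACTIONS)

def choose_action(state):
    # epsilon-greedy exploration
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    vals = q_values(state)
    return vals.index(max(vals))

def update(state, action, reward, next_state):
    # standard one-step Q-learning backup
    target = reward + GAMMA * max(q_values(next_state))
    q_values(state)[action] += ALPHA * (target - q_values(state)[action])
```

    Without the velocity bin in the state key, two visits to the same angle at different speeds would collide in the table, which is one concrete way the Markov assumption breaks for dynamic tasks.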

  20. Archaic man meets a marvellous automaton: posthumanism, social robots, archetypes.

    PubMed

    Jones, Raya

    2017-06-01

    Posthumanism is associated with critical explorations of how new technologies are rewriting our understanding of what it means to be human and how they might alter human existence itself. Intersections with analytical psychology vary depending on which technologies are held in focus. Social robotics promises to populate everyday settings with entities that have populated the imagination for millennia. A legend of A Marvellous Automaton appears as early as 350 B.C. in a book of Taoist teachings, and is joined by ancient and medieval legends of manmade humanoids coming to life, as well as the familiar robots of modern science fiction. However, while the robotics industry seems to be realizing an archetypal fantasy, the technology creates new social realities that generate distinctive issues of potential relevance for the theory and practice of analytical psychology. © 2017, The Society of Analytical Psychology.

  1. Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.

    2001-01-01

    The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.

  2. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
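    The morphological operations mentioned above can be sketched in a few lines. This plain-Python version of grayscale erosion and dilation with a flat 3x3 structuring element only illustrates the pixel-level computation; how the paper maps it onto an FPGA-based reconfigurable environment is not shown here:

```python
# Grayscale morphology sketch: erosion takes the neighborhood minimum,
# dilation the neighborhood maximum, with border clamping.

def _neighborhood(img, x, y):
    # 3x3 window around (x, y), clipped at image borders
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

def erode(img):
    # each output pixel is the minimum over its 3x3 neighborhood
    return [[min(_neighborhood(img, x, y)) for x in range(len(img[0]))]
            for y in range(len(img))]

def dilate(img):
    # each output pixel is the maximum over its 3x3 neighborhood
    return [[max(_neighborhood(img, x, y)) for x in range(len(img[0]))]
            for y in range(len(img))]
```

    Because each output pixel depends only on a small fixed window, such operators parallelize naturally, which is what makes them a good fit for reconfigurable hardware.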

  3. Women Warriors: Why the Robotics Revolution Changes the Combat Equation

    DTIC Science & Technology

    2016-03-01

    combat. U.S. Army RDECOM PRISM 6, no. 1 FEATURES | 91 Women Warriors Why the Robotics Revolution Changes the Combat Equation1 BY LINELL A. LETENDRE...underappreciated—fac- tor is poised to alter the women in combat debate: the revolution in robotics and autonomous systems. The technology leap afforded by...developing robotic and autonomous systems and their potential impact on the future of combat. Revolution in Robotics: A Changing Battlefield20 The

  4. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Posters for the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event are seen posted around the campus on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  5. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Panoramic of some of the exhibits available on the campus of the Worcester Polytechnic Institute (WPI) during their "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Anthony Shrout)

  6. Object recognition for autonomous robot utilizing distributed knowledge database

    NASA Astrophysics Data System (ADS)

    Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji

    2003-10-01

    In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects in order to understand them, including their physical characteristics.

  7. Motion Recognition and Modifying Motion Generation for Imitation Robot Based on Motion Knowledge Formation

    NASA Astrophysics Data System (ADS)

    Okuzawa, Yuki; Kato, Shohei; Kanoh, Masayoshi; Itoh, Hidenori

    A knowledge-based approach to imitation learning of motion generation for humanoid robots and an imitative motion generation system based on motion knowledge learning and modification are described. The system has three parts: recognizing, learning, and modifying. The first part recognizes an instructed motion, distinguishing it from motions in the knowledge database using a continuous hidden Markov model. When the motion is recognized as unfamiliar, the second part learns it using locally weighted regression and acquires knowledge of the motion. When the robot recognizes the instructed motion as familiar, or judges that its acquired knowledge is applicable to generating the motion, the third part imitates the instructed motion by modifying a learned motion. This paper reports performance results on the imitation of several radio gymnastics motions.

  8. Ascending Stairway Modeling: A First Step Toward Autonomous Multi-Floor Exploration

    DTIC Science & Technology

    2012-10-01

    Many robotics platforms are capable of ascending stairways, but all existing approaches for autonomous stair climbing use stairway detection as a...the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor building. Our proposed solution to this problem is...over several steps. However, many ground robots are not capable of traversing tight spiral stairs , and so we do not focus on these types. The stairway is

  9. A developmental roadmap for learning by imitation in robots.

    PubMed

    Lopes, Manuel; Santos-Victor, José

    2007-04-01

    In this paper, we present a strategy whereby a robot acquires the capability to learn by imitation following a developmental pathway consisting of three levels: 1) sensory-motor coordination; 2) world interaction; and 3) imitation. With these stages, the system is able to learn tasks by imitating human demonstrators. We describe results of the different developmental stages, involving perceptual and motor skills, implemented in our humanoid robot, Baltazar. At each stage, the system's attention is drawn toward different entities: its own body and, later on, objects and people. Our main contributions are the general architecture and the implementation of all the necessary modules until imitation capabilities are eventually acquired by the robot. Also, several other contributions are made at each level: learning of sensory-motor maps for redundant robots, a novel method for learning how to grasp objects, and a framework for learning task description from observation for program-level imitation. Finally, vision is used extensively as the sole sensing modality (sometimes in a simplified setting), avoiding the need for special data-acquisition hardware.

  10. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    PubMed Central

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in an application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion-characteristic constraints. The method realizes autonomous navigation of the snake robot, without external nodes or assistance, using its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and a fixed relationship, and applies zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method: the position error is less than 5% of the Total-Traveled-Distance (TTD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and can be extended to other similar multi-link robots. PMID:29547515
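    A minimal sketch of the flavor of constraint-aided filtering described above: IMU integration accumulates drift during prediction, and a zero-velocity pseudo-measurement applied when the motion constraint holds pulls the estimate back. The full method uses an EKF over the snake robot's kinematics model; this scalar version and its noise values are illustrative assumptions only:

```python
# Scalar Kalman-style illustration of a zero-state (zero-velocity) constraint
# update. Process noise q and measurement noise r are hypothetical.

def predict(v, P, accel, dt, q=0.01):
    # integrate acceleration; process noise grows the variance
    return v + accel * dt, P + q

def zero_velocity_update(v, P, r=0.001):
    # Kalman update with pseudo-measurement z = 0 and H = 1
    K = P / (P + r)           # Kalman gain
    v_new = v + K * (0.0 - v)
    P_new = (1.0 - K) * P
    return v_new, P_new

# A small accelerometer bias accumulates into velocity drift...
v, P = 0.0, 0.01
for _ in range(50):
    v, P = predict(v, P, accel=0.02, dt=0.1)
# ...and one constraint update pulls the estimate back toward zero.
v, P = zero_velocity_update(v, P)
```

    In the full method the same idea is applied in vector form, with the constraint direction chosen from the snake robot's gait phase and control mode.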

  11. Planning Flight Paths of Autonomous Aerobots

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric; Elfes, Alberto; Sharma, Shivanjli

    2009-01-01

    Algorithms for planning flight paths of autonomous aerobots (robotic blimps) to be deployed in scientific exploration of remote planets are undergoing development. These algorithms are also adaptable to terrestrial applications involving robotic submarines as well as aerobots and other autonomous aircraft used to acquire scientific data or to perform surveying or monitoring functions.

  12. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    Visitors, some with their dogs, line up to make their photo inside a space suit exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  13. Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging

    DTIC Science & Technology

    2016-01-01

    satisfying journeys in my life. I would like to thank Ryan for his guidance through the truly exciting world of mobile robotics and robotic perception. Thank...Multi-session and Multi-robot SLAM . . . . . . . . . . . . . . . 15 1.3.3 Robust Techniques for SLAM Backends . . . . . . . . . . . . . . 18 1.4 A...sonar. xv CHAPTER 1 Introduction 1.1 The Importance of SLAM in Autonomous Robotics Autonomous mobile robots are becoming a promising aid in a wide

  14. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    The bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" is seen wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  15. How to make an autonomous robot as a partner with humans: design approach versus emergent approach.

    PubMed

    Fujita, M

    2007-01-15

    In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.

  16. Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
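    A toy sketch in the spirit of combining homing with pairwise robot-robot avoidance for spherical robots. This is a generic potential-field combination, not the paper's analytically derived strategy, and all gains and ranges are hypothetical:

```python
# Commanded velocity = attractive homing term + pairwise repulsive terms
# from every other robot inside a safety range (2-D, point-mass robots).
K_HOME, K_REP, SAFE_DIST = 1.0, 2.0, 2.0  # hypothetical gains/range

def step_velocity(pos, goal, others):
    # attractive term toward the goal
    vx = K_HOME * (goal[0] - pos[0])
    vy = K_HOME * (goal[1] - pos[1])
    # repulsive term from each nearby robot, growing as separation shrinks
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 0 < d < SAFE_DIST:
            scale = K_REP * (SAFE_DIST - d) / d
            vx += scale * dx
            vy += scale * dy
    return vx, vy
```

    With no neighbors the robot homes directly; a neighbor between robot and goal produces a net velocity away from the conflict, which is the qualitative behavior the many-body formulation is designed to guarantee.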

  17. A New Simulation Framework for Autonomy in Robotic Missions

    NASA Technical Reports Server (NTRS)

    Flueckiger, Lorenzo; Neukom, Christian

    2003-01-01

    Autonomy is a key factor in remote robotic exploration and there is significant activity addressing the application of autonomy to remote robots. It has become increasingly important to have simulation tools available to test the autonomy algorithms. While industrial robotics benefits from a variety of high quality simulation tools, researchers developing autonomous software are still dependent primarily on block-world simulations. The Mission Simulation Facility (MSF) project addresses this shortcoming with a simulation toolkit that will enable developers of autonomous control systems to test their system's performance against a set of integrated, standardized simulations of NASA mission scenarios. MSF provides a distributed architecture that connects the autonomous system to a set of simulated components replacing the robot hardware and its environment.

  18. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-15

    SpacePRIDE Team members Chris Williamson, right, and Rob Moore, second from right, answer questions from 8th grade Sullivan Middle School (Mass.) students about their robot on Friday, June 15, 2012 at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. SpacePRIDE's robot team will compete for a $1.5 million NASA prize in the NASA-WPI Sample Return Robot Centennial Challenge at WPI. Teams have been challenged to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  19. Assisted Perception, Planning and Control for Remote Mobility and Dexterous Manipulation

    DTIC Science & Technology

    2017-04-01

    on unmanned aerial vehicles (UAVs). The underlying algorithm is based on an Extended Kalman Filter (EKF) that simultaneously estimates robot state...and sensor biases. The filter developed provided a probabilistic fusion of sensor data from many modalities to produce a single consistent position...estimation for a walking humanoid. Given a prior map using a Gaussian particle filter , the LIDAR based system is able to provide a drift-free

  20. Foundations for a Theory of Mind for a Humanoid Robot

    DTIC Science & Technology

    2001-05-01

    visual processing software and our understanding of how people interact with Lazlo. Thanks also to Jessica Banks, Charlie Kemp, and Juan Velasquez for...capacities of infants (e.g., Carey, 1999; Gelman , 1990). Furthermore, research on pervasive developmental disorders such as autism has focused on the se...Keil, 1995; Carey, 1995; Gelman et al., 1983). While the discrimination of animate from inanimate certainly relies upon many distinct properties

  1. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences

    PubMed Central

    2016-01-01

    Emerging studies indicate that several species such as corvids, apes and children solve ‘The Crow and the Pitcher’ task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause–effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended ‘learning–prediction–abstraction’ loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. PMID:27466440

  2. Humanoid infers Archimedes' principle: understanding physical relations and object affordances through cumulative learning experiences.

    PubMed

    Bhat, Ajaz Ahmad; Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2016-07-01

    Emerging studies indicate that several species such as corvids, apes and children solve 'The Crow and the Pitcher' task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause-effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended 'learning-prediction-abstraction' loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with few objects, the predictions of the robot in case of novel objects converge close to the physical law, i.e. the Archimedes principle: this being independent of both the objects explored during learning and the order of their cumulative exploration. © 2016 The Author(s).

  3. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many specifications of robots have been improved and enriched with advances in technology. Among them are humanoid robots with facial expressions that come closer to natural human facial expressions. The purpose of this research is to perform computational simulation of facial expressions and to conduct tensile strength testing of silicone rubber as an artificial skin. Facial expressions were computed by determining the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. The robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expressions are modeled on human facial expressions, with the muscle structure of the face following human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted to follow human expressions. Tensile strength testing is conducted to check the proportional force that the artificial skin can sustain when applied to future robot facial expressions. Combining computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.

  4. Equipment Proposal for the Autonomous Vehicle Systems Laboratory at UIW

    DTIC Science & Technology

    2015-04-29

    testing, 5) 38 Lego Mindstorm EV3 and Hitechnic Sensors for use in feedback control and autonomous systems for STEM undergraduate and High School...autonomous robots using the Lego Mindstorm EV3. This robotics workshop will be used as a pilot study for next summer when more High School students

  5. Autonomous Robotic Weapons: US Army Innovation for Ground Combat in the Twenty-First Century

    DTIC Science & Technology

    2015-05-21

    2013, accessed March 29, 2015, http://www.bbc.com/news/magazine-21576376?print=true. 113 Steven Kotler, “Say Hello to Comrade Terminator: Russia’s... hello -to-comrade-terminator-russias-army-of- killer-robots/. 114 David Hambling, “Russia Wants Autonomous Fighting Robots, and Lots of Them: Putin’s...how-humans-respond-to- robots-knight/HumanRobot-PartnershipsR2.pdf?la=en. Kotler, Steven. “Say Hello to Comrade Terminator: Russia’s Army of

  6. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    A visitor to the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event helps demonstrate how a NASA rover design enables the rover to climb over obstacles higher than its own body on Saturday, June 16, 2012 at WPI in Worcester, Mass. The event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  7. A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism

    PubMed Central

    Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan

    2013-01-01

    Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831

  8. Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 and Smart Autonomous Sand-Swimming Excavator

    NASA Technical Reports Server (NTRS)

    Sandy, Michael

    2015-01-01

    The Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 is an excavation robot for mining regolith on a planet like Mars. The robot is programmed using the Robot Operating System (ROS), and it also uses a physical simulation program called Gazebo. This internship focused on various functions of the program in order to make the robot more polished and efficient. During the internship, another project, called the Smart Autonomous Sand-Swimming Excavator, was also worked on. This is a robot designed to dig through sand and extract sample material. The intern worked on programming the Sand-Swimming robot and designing the electrical system to power and control the robot.

  9. A small, cheap, and portable reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey

    2005-05-01

    While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.

  10. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue that must be resolved in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied to dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, good expansion capability, and fast exploration speed compared with other planning methods. However, the existing RRT algorithm is limited when planning paths for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle-avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. The method introduces a target-directional node extension that dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, curvature-continuous, executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle-avoidance experiment with a robotic manipulator in a dynamic unstructured environment. The proposed method not only has practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also offers theoretical reference value for path planning of other types of robots.
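    The target-directional extension at the heart of such methods is commonly realized as goal-biased sampling. The following is a minimal planar sketch of that idea, not the authors' implementation; the domain, step size, and bias values are illustrative:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.2, max_iters=5000, seed=0):
    """Goal-biased RRT in a 10 x 10 plane: with probability `goal_bias`
    the tree extends directly toward the goal, mimicking the spirit of a
    target-directional node extension."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue                     # reject nodes inside obstacles
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:  # close enough: backtrack the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Free space everywhere except a disc obstacle of radius 1 at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.0
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

    A smoothing pass, such as a curvature-constrained optimization, would then post-process `path` before execution on a manipulator.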

  11. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, make decisions, and learn from experience. The advanced inspection system will control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. Many computer vision, image processing, and machine learning techniques are available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components, to identify open-source algorithms and techniques, and to integrate robot hardware.

  12. Development of a semi-autonomous service robot with telerobotic capabilities

    NASA Technical Reports Server (NTRS)

    Jones, J. E.; White, D. R.

    1987-01-01

    The importance to the United States of semi-autonomous systems for application to a large number of manufacturing and service processes is very clear. Two principal reasons emerge as the primary driving forces for the development of such systems: enhanced national productivity and operation in environments which are hazardous to humans. Completely autonomous systems may not currently be economically feasible. However, autonomous systems that operate in a limited operational domain or that are supervised by humans are within the technology capability of this decade and will likely provide a reasonable return on investment. The two research and development efforts of autonomy and telerobotics are distinctly different, yet interconnected. The first addresses the communication of an intelligent electronic system with a robot, while the second requires human communication and ergonomic consideration. Discussed here is work in robotic control, human/robot team implementation, expert-system robot operation, and sensor development by the American Welding Institute, MTS Systems Corporation, and the Colorado School of Mines Center for Welding Research.

  13. Sandia National Laboratories proof-of-concept robotic security vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, J.J.; Jones, D.P.; Klarer, P.R.

    1989-01-01

    Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid in testing interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, where applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.

  14. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and user-preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented, in which the features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start, and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. The entire system was tested with seven volunteers: three elderly subjects, two below-elbow amputees, and two young subjects with normal limbs. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
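    The predict-update skeleton behind an EKF feature-based SLAM algorithm can be sketched as follows. For brevity this uses a linear 1-D model with a single landmark in place of the real motion and measurement Jacobians; all matrices and noise levels are illustrative, not the paper's:

```python
import numpy as np

def ekf_step(x, P, u, z, F, B, H, Q, R):
    """One predict-update cycle of the Kalman filter skeleton used in
    feature-based SLAM (linear stand-ins for the EKF Jacobians)."""
    # Predict: propagate state and covariance through the motion model.
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    # Update: correct with the landmark measurement.
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [robot position, landmark position]; the landmark is static.
F = np.eye(2)
B = np.array([[1.0], [0.0]])
H = np.array([[-1.0, 1.0]])          # sensor measures landmark relative to robot
Q = np.diag([0.01, 0.0])
R = np.array([[0.04]])
x = np.array([0.0, 4.0])             # landmark guess is off by 1 m
P = np.diag([0.1, 1.0])

rng = np.random.default_rng(0)
true_robot, true_lm = 0.0, 5.0
for _ in range(30):
    true_robot += 0.1                                        # robot advances
    z = np.array([true_lm - true_robot + rng.normal(0, 0.2)])
    x, P = ekf_step(x, P, np.array([0.1]), z, F, B, H, Q, R)
```

    A real feature-based SLAM state stacks the robot pose with every line and corner feature, and H becomes the Jacobian of the range-bearing measurement model.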

  15. Vision-based semi-autonomous outdoor robot system to reduce soldier workload

    NASA Astrophysics Data System (ADS)

    Richardson, Al; Rodgers, Michael H.

    2001-09-01

    Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
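    The teach-and-retrace route map can be sketched as a waypoint list recorded while following the leader and consumed on the return trip. The class name and spacing parameters below are illustrative, not the developers' design:

```python
import math

class RouteMap:
    """Teach-and-retrace sketch: poses recorded while following a leader
    are later replayed as navigation targets."""

    def __init__(self, waypoint_spacing=1.0):
        self.spacing = waypoint_spacing
        self.waypoints = []

    def record(self, pose):
        # Store a new waypoint only after moving far enough from the last one.
        if not self.waypoints or math.dist(pose, self.waypoints[-1]) >= self.spacing:
            self.waypoints.append(pose)

    def retrace(self, pose, tolerance=0.5):
        # Drop waypoints already reached; return the next one to steer toward.
        while self.waypoints and math.dist(pose, self.waypoints[0]) < tolerance:
            self.waypoints.pop(0)
        return self.waypoints[0] if self.waypoints else None

route = RouteMap()
for pose in [(0.0, 0.0), (0.4, 0.0), (1.2, 0.0), (2.5, 0.1)]:   # teach phase
    route.record(pose)
next_goal = route.retrace((0.1, 0.0))                           # retrace phase
```

    The real system would attach 3D-feature descriptors to each waypoint so the robot can localize against the map rather than rely on odometry alone.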

  16. Spectrally Queued Feature Selection for Robotic Visual Odometry

    DTIC Science & Technology

    2010-11-23

    in these systems has yet to be defined. 1. INTRODUCTION 1.1 Uses of Autonomous Vehicles Autonomous vehicles have a wide range of possible...applications. In military situations, autonomous vehicles are valued for their ability to keep Soldiers far away from danger. A robot can inspect and disarm...just a glimpse of what engineers are hoping for in the future. 1.2 Biological Influence Autonomous vehicles are becoming more of a possibility in

  17. Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)

    2002-01-01

    Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role, resulting in a complex interplay of a broad range of factors in the development and planning of new missions. This paper discusses how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems is described, as well as new experiments utilizing human and robotic approaches. Finally, there is a discussion of how best to utilize these various approaches for meeting space exploration goals.

  18. Development of autonomous grasping and navigating robot

    NASA Astrophysics Data System (ADS)

    Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi

    2015-01-01

    The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking it up. To achieve these operations, we designed the robot with three functions: an autonomous navigating function that generates a map and a route in an unknown environment, an item-position recognizing function, and a grasping function. We tested this robot in an unknown environment, where it achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment show the potential of such robots to reduce human workload.

  19. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
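    The VFF idea can be sketched as a vector sum: occupied cells of the robot's grid map repel it, the operator-prescribed target attracts it, and the normalized resultant sets the steering direction. The grid representation and gains below are illustrative, not the authors' tuned values:

```python
import math

def vff_direction(robot, target, grid, cell=1.0, f_att=1.0, f_rep=1.0):
    """Virtual force field sketch: each occupied cell pushes the robot
    away (falling off with distance squared), the target pulls it, and
    the normalized resultant is the commanded heading."""
    # Attractive force: unit vector toward the target.
    dx, dy = target[0] - robot[0], target[1] - robot[1]
    d = math.hypot(dx, dy) or 1.0
    fx, fy = f_att * dx / d, f_att * dy / d
    # Repulsive forces from occupied cells (dict: (i, j) -> occupancy 0..1).
    for (i, j), occupancy in grid.items():
        cx, cy = i * cell, j * cell
        rx, ry = robot[0] - cx, robot[1] - cy
        r = math.hypot(rx, ry) or 1e-6
        fx += f_rep * occupancy * rx / r**3
        fy += f_rep * occupancy * ry / r**3
    mag = math.hypot(fx, fy) or 1.0
    return fx / mag, fy / mag

# An obstacle just ahead deflects the heading away from the straight line.
grid = {(5, 5): 1.0}
heading = vff_direction((4.0, 5.2), (10.0, 5.0), grid)
```

    Because the repulsion weights cell occupancy, noisy ultrasonic readings that only partially fill a cell exert proportionally weaker forces, which is what makes the scheme tolerant of inaccurate sensor data.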

  20. Robotic Assistance in Medication Management: Development and Evaluation of a Prototype.

    PubMed

    Schweitzer, Marco; Hoerbst, Alexander

    2016-01-01

    An increasing number of elderly people and the prevalence of multimorbid conditions often lead to age-related problems for patients in handling their everyday domestic medication, which commonly involves polypharmacy. Ambient Assisted Living therefore provides means to support an elderly person's everyday life. In the present paper we investigated the viability of using a commercial mass-produced humanoid robot system to support the domestic medication of an elderly person. A prototypical software application based on the NAO robot platform was implemented to remind the patient of drug intakes, check for drug-drug interactions, document compliance, and assist throughout the complete process of individual medication. A technical and functional evaluation of the system in a laboratory setting revealed versatile and viable results, though further investigations are needed to examine its practical use in the field.
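    A reminder-plus-interaction-check loop of the kind described can be sketched as below. The drug names, times, and interaction table are invented for illustration and are not the prototype's actual data or API:

```python
from datetime import time

# Hypothetical interaction table; a real system would query a drug database.
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

class MedicationAssistant:
    def __init__(self):
        self.schedule = []   # (intake time, drug)
        self.log = []        # confirmed intakes, for compliance documentation

    def add_drug(self, at, drug):
        # Warn if the new drug interacts with anything already scheduled.
        warnings = [INTERACTIONS[frozenset({drug, other})]
                    for _, other in self.schedule
                    if frozenset({drug, other}) in INTERACTIONS]
        self.schedule.append((at, drug))
        return warnings

    def due(self, now):
        # Drugs whose intake time has passed; the robot would voice these.
        return [drug for at, drug in self.schedule if at <= now]

    def confirm(self, drug, now):
        self.log.append((now, drug))

assistant = MedicationAssistant()
assistant.add_drug(time(8, 0), "warfarin")
warnings = assistant.add_drug(time(12, 0), "aspirin")   # flags the interaction
```

    On a robot platform, `due` would be polled on a timer and `confirm` triggered by speech or touch interaction with the patient.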

  1. Autonomy in robots and other agents.

    PubMed

    Smithers, T

    1997-06-01

    The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life, and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept: one essentially the same as we find in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to be able to deal effectively with the kinds of environments in which we live and work: environments which have significant large-scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.

  2. Application of autonomous robotized systems for the collection of nearshore topographic changing and hydrodynamic measurements

    NASA Astrophysics Data System (ADS)

    Belyakov, Vladimir; Makarov, Vladimir; Zezyulin, Denis; Kurkin, Andrey; Pelinovsky, Efim

    2015-04-01

    Hazardous phenomena in the coastal zone lead to topographic changes that are difficult to inspect by traditional methods. This is why autonomous robots are used for the collection of nearshore topographic and hydrodynamic measurements. A well-known example is the robot RTS-Hanna (Wubbold, F., Hentschel, M., Vousdoukas, M., and Wagner, B. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements. Coastal Engineering Proceedings, 2012, vol. 33, Paper 53). We describe here several designs of mobile systems developed in the Laboratory "Transported Machines and Transported Complexes", Nizhny Novgorod State Technical University. They can be used in field surveys and in the monitoring of nearshore wave regimes.

  3. Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.

    1999-08-01

    A neuro-fuzzy control method for navigation of an Autonomous Guided Vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The vehicle's autonomous navigation ability and road-following precision are mainly influenced by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for a mobile robot due to their high robustness and error-tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to perceptual environment feature identification and classification, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
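    The flavor of such a fuzzy navigation rule base can be shown with a two-rule obstacle-avoidance sketch; the membership functions, rules, and ranges are illustrative assumptions, not the controller from the paper:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_turn_rate(left_dist, right_dist):
    """Two-rule fuzzy controller: if an obstacle is NEAR on one side,
    turn away from it; output is a weighted average (defuzzification)."""
    near_left = tri(left_dist, -1.0, 0.0, 2.0)    # NEAR peaks at distance 0
    near_right = tri(right_dist, -1.0, 0.0, 2.0)
    rules = [(near_left, -1.0),    # near on the left  -> turn right
             (near_right, +1.0)]   # near on the right -> turn left
    total = sum(weight for weight, _ in rules)
    return sum(weight * out for weight, out in rules) / total if total else 0.0

turn = fuzzy_turn_rate(0.5, 5.0)   # obstacle close on the left
```

    In a neuro-fuzzy scheme the membership parameters (here the triangle corners) would be tuned by a neural network rather than fixed by hand.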

  4. Terrain discovery and navigation of a multi-articulated linear robot using map-seeking circuits

    NASA Astrophysics Data System (ADS)

    Snider, Ross K.; Arathorn, David W.

    2006-05-01

    A significant challenge in robotics is providing a robot with the ability to sense its environment and then autonomously move while accommodating obstacles. The DARPA Grand Challenge, one of the most visible examples, set the goal of driving a vehicle autonomously for over a hundred miles avoiding obstacles along a predetermined path. Map-Seeking Circuits have shown their biomimetic capability in both vision and inverse kinematics and here we demonstrate their potential usefulness for intelligent exploration of unknown terrain using a multi-articulated linear robot. A robot that could handle any degree of terrain complexity would be useful for exploring inaccessible crowded spaces such as rubble piles in emergency situations, patrolling/intelligence gathering in tough terrain, tunnel exploration, and possibly even planetary exploration. Here we simulate autonomous exploratory navigation by an interaction of terrain discovery using the multi-articulated linear robot to build a local terrain map and exploitation of that growing terrain map to solve the propulsion problem of the robot.

  5. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture.

    PubMed

    Li, Cai; Lowe, Robert; Ziemke, Tom

    2013-01-01

    The identification of learning mechanisms for locomotion has been the subject of much research for some time, but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability that emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-square-temporal-difference-based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward, it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value.
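    A minimal CPG in this spirit can be written as two coupled phase oscillators whose coupling drives the legs into antiphase; the NAO controller in the paper is considerably richer, so treat this purely as a sketch of the oscillator substrate:

```python
import math

def cpg_step(phases, dt=0.01, omega=2 * math.pi, coupling=2.0):
    """Two coupled phase oscillators: each phase is nudged toward a
    half-cycle offset from the other, producing an alternating rhythm."""
    p0, p1 = phases
    dp0 = omega + coupling * math.sin(p1 - p0 - math.pi)
    dp1 = omega + coupling * math.sin(p0 - p1 - math.pi)
    return (p0 + dt * dp0, p1 + dt * dp1)

phases = (0.0, 0.3)                     # start nearly in phase
for _ in range(5000):
    phases = cpg_step(phases)
drive = [math.sin(p) for p in phases]   # e.g. left/right hip drive signals
```

    The actor-critic layer would then modulate parameters such as `omega` and the output amplitude based on the learned value function.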

  7. The effect of collision avoidance for autonomous robot team formation

    NASA Astrophysics Data System (ADS)

    Seidman, Mark H.; Yang, Shanchieh J.

    2007-04-01

    As technology and research advance into the era of cooperative robots, many autonomous robot team algorithms have emerged. Shape formation is a common and critical task in many cooperative robot applications. While theoretical studies of robot team formation have shown success, it is unclear whether such algorithms will perform well in a real-world environment. This work examines the effect of collision avoidance schemes on an idealized circle formation algorithm, which behaves similarly when robot-to-robot communications are in place. Our findings reveal that robots with basic collision avoidance capabilities are still able to form into a circle under most conditions. Moreover, robot sizes, sensing ranges, and other critical physical parameters are examined to determine their effects on the algorithm's performance.
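    A circle formation with a basic collision-avoidance reflex can be simulated in a few lines. The slot assignment, gains, and ranges are illustrative, not the algorithm studied in the paper:

```python
import math

def form_circle(agents, center, radius, steps=400, speed=0.05, min_sep=0.3):
    """Each robot heads for an assigned slot on the circle but sidesteps
    any neighbour closer than `min_sep` (a basic avoidance reflex)."""
    n = len(agents)
    slots = [(center[0] + radius * math.cos(2 * math.pi * k / n),
              center[1] + radius * math.sin(2 * math.pi * k / n))
             for k in range(n)]
    for _ in range(steps):
        moved = []
        for k, (x, y) in enumerate(agents):
            tx, ty = slots[k][0] - x, slots[k][1] - y
            d = math.hypot(tx, ty)
            vx, vy = (tx / d, ty / d) if d > 1e-9 else (0.0, 0.0)
            for j, (ox, oy) in enumerate(agents):
                if j != k and math.dist((x, y), (ox, oy)) < min_sep:
                    away = math.hypot(x - ox, y - oy) or 1e-9
                    vx += (x - ox) / away     # push away from the neighbour
                    vy += (y - oy) / away
            moved.append((x + speed * vx, y + speed * vy))
        agents = moved                        # synchronous update
    return agents

# Six robots start bunched on a line and spread onto a circle of radius 2.
final = form_circle([(0.1 * k, 0.0) for k in range(6)], (0.0, 0.0), 2.0)
```

    The interesting empirical question the paper raises is precisely how such avoidance reflexes perturb convergence when sensing ranges and robot sizes vary.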

  8. Robotics and artificial intelligence: Jewish ethical perspectives.

    PubMed

    Rappaport, Z H

    2006-01-01

    In 16th Century Prague, Rabbi Loew created a Golem, a humanoid made of clay, to protect his community. When the Golem became too dangerous to his surroundings, he was dismantled. This Jewish theme illustrates some of the guiding principles of the Jewish approach to the moral dilemmas inherent in future technologies, such as artificial intelligence and robotics. Man is viewed as having received the power to improve upon creation and to develop technologies toward that end, with the proviso that appropriate safeguards are taken. Ethically, not harming is viewed as taking precedence over promoting good. Jewish ethical thinking approaches these novel technological possibilities with a cautious optimism that mankind will derive their benefits without coming to harm.

  9. Basic emotions and adaptation. A computational and evolutionary model.

    PubMed

    Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio

    2017-01-01

    The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on the emotion of fear, so the environment explored by the artificial agents can contain stimuli that are either safe or dangerous to pick. The simulated task is based on classical conditioning: the agents must learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations in each individual's life cycle. We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions.
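    The genetic-algorithm training loop can be sketched as follows, with a deliberately simplified stand-in task (pick stimuli with a positive cue, avoid the rest) instead of the paper's conditioning environment; the genome encoding, mutation rates, and fitness function are illustrative:

```python
import random

def fitness(genome, trials, rng):
    """Reward picking safe stimuli (cue > 0) and avoiding dangerous ones."""
    w, b = genome
    score = 0
    for _ in range(trials):
        cue = rng.uniform(-1, 1)
        pick = (w * cue + b) > 0          # one-neuron "policy"
        score += 1 if pick == (cue > 0) else -1
    return score

def evolve(pop_size=30, generations=40, trials=50, seed=1):
    rng = random.Random(seed)
    population = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: fitness(g, trials, rng), reverse=True)
        elite = ranked[: pop_size // 5]
        # Offspring are mutated copies of randomly chosen elite parents.
        population = [(p[0] + rng.gauss(0, 0.1), p[1] + rng.gauss(0, 0.1))
                      for p in [rng.choice(elite) for _ in range(pop_size)]]
        population[: len(elite)] = elite      # elitism: keep the best unmutated
    return max(population, key=lambda g: fitness(g, trials, rng))

best = evolve()
```

    In the paper's setup the genome would instead encode full network weights and fitness would be accumulated reward over an agent's simulated life cycle.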

  10. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    PubMed Central

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  11. Space Shuttle Discovery is Prepared for Launch

    NASA Image and Video Library

    2011-02-23

    The space shuttle Discovery is seen shortly after the Rotating Service Structure was rolled back at launch pad 39A, at the Kennedy Space Center in Cape Canaveral, Florida, on Wednesday, Feb. 23, 2011. Discovery, on its 39th and final flight, will carry the Italian-built Permanent Multipurpose Module (PMM), Express Logistics Carrier 4 (ELC4), and Robonaut 2, the first humanoid robot in space, to the International Space Station. Photo Credit: (NASA/Bill Ingalls)

  12. Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift

    DTIC Science & Technology

    2004-05-01

    system . We propose a real- time algorithm to infer depth and build 3-dimensional coarse maps for objects through the analysis of cues provided by an... system is well defined at the boundary of these regions (although the derivatives are not). A time domain analysis is presented for a piece-linear... Analysis of Multivariable Systems ......................... 266 D.3.1 Networks of Multiple Neural Oscillators ................. 266 D.3.2 Networks of

  13. An integrated design and fabrication strategy for entirely soft, autonomous robots.

    PubMed

    Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J

    2016-08-25

    Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.

  14. Experiments in autonomous robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamel, W.R.

    1987-01-01

    The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.

  15. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Program Manager for Centennial Challenges Sam Ortega helps show a young visitor how to drive a rover as part of the interactive NASA Mars rover exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  16. Sample Return Robot Centennial Challenge

    NASA Image and Video Library

    2012-06-16

    NASA Deputy Administrator Lori Garver and NASA Chief Technologist Mason Peck stop to look at the bronze statue of the goat mascot for Worcester Polytechnic Institute (WPI) named "Gompei" that is wearing a staff t-shirt for the "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)

  17. Control strategies for robots in contact

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung

    In the field of robotics, there is a growing need to provide robots with the ability to interact with complex and unstructured environments. Operations in such environments pose significant challenges in terms of sensing, planning, and control. In particular, it is critical to design control algorithms that account for the dynamics of the robot and environment at multiple contacts. The work in this thesis focuses on the development of a control framework that addresses these issues. The approaches are based on the operational space control framework and estimation methods. By accounting for the dynamics of the robot and environment, modular and systematic methods are developed for robots interacting with the environment at multiple locations. The proposed force control approach demonstrates high performance in the presence of uncertainties. Building on this basic capability, new control algorithms have been developed for haptic teleoperation, multi-contact interaction with the environment, and whole body motion of non-fixed based robots. These control strategies have been experimentally validated through simulations and implementations on physical robots. The results demonstrate the effectiveness of the new control structure and its robustness to uncertainties. The contact control strategies presented in this thesis are expected to contribute to the needs in advanced controller design for humanoid and other complex robots interacting with their environments.
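    The force-control idea described above can be illustrated, in highly simplified form, by a single-axis PI force loop; the function name, gains, and the first-order environment model in the usage note are illustrative assumptions, not the controller actually developed in the thesis.

```python
def pi_force_controller(f_desired, kp=1.0, ki=0.5, dt=0.01):
    """Closed-loop PI force regulation along one contact direction,
    a minimal stand-in for operational-space force control.
    Gains and time step are illustrative."""
    integral = 0.0

    def step(f_measured):
        nonlocal integral
        error = f_desired - f_measured
        integral += error * dt
        # Commanded correction along the contact normal.
        return kp * error + ki * integral

    return step
```

    Driving a simple compliant environment (e.g. `f += 0.5 * u` per cycle) with this loop converges to the desired contact force, which is the basic capability the thesis builds its multi-contact strategies on.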

  18. Autonomous Legged Hill and Stairwell Ascent

    DTIC Science & Technology

    2011-11-01

    environments with little burden to a human operator. Keywords: autonomous robot, hill climbing, stair climbing, sequential composition, hexapod, self...X-RHex robot on a set of stairs with laser scanner, IMU, wireless repeater, and handle payloads. making them useful for both climbing hills and...reconciliation into that more powerful (but restrictive) framework. 1) The Stair Climbing Behavior: RHex robots have been climbing single-flight stairs

  19. Semi-autonomous mine detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Few; Roelof Versteeg; Herman Herman

    2010-04-01

    CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, a countermine sensor, and a number of integrated software packages which provide for real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing this system performed well in detecting, marking and avoiding both AT and AP mines. Based on the results of the CMMAD investigation we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which preclude, from an autonomous robotic perspective, the rapid development and deployment of fieldable systems.

  20. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team KuuKulgur waits to begin the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  1. KSC-2012-4344

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – During a free-flight test of the Project Morpheus vehicle at the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the vehicle lifted off the ground and then experienced a hardware component failure, which prevented it from maintaining stable flight. Engineers are looking into the test data and the agency will release information as it becomes available. Failures such as these were anticipated prior to the test, and are part of the development process for any complex spaceflight hardware. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

  2. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive cartoon-like embodiment. The robot is affordable, durable, and portable, so that it can be used in various settings including schools, clinics, and the home, enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  3. Proceedings of the 1989 CESAR/CEA (Center for Engineering Systems Advanced Research/Commissariat a l'Energie Atomique) workshop on autonomous mobile robots (May 30--June 1, 1989)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.; Pin, F.G.

    1990-03-01

    The US DOE Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) and the Commissariat a l'Energie Atomique's (CEA) Office de Robotique et Productique within the Directorat a la Valorization are working toward a long-term cooperative agreement and relationship in the area of Intelligent Systems Research (ISR). This report presents the proceedings of the first CESAR/CEA Workshop on Autonomous Mobile Robots, which took place at ORNL on May 30, 31 and June 1, 1989. The purpose of the workshop was to present and discuss methodologies and algorithms under development at the two facilities in the area of perception and navigation for autonomous mobile robots in unstructured environments. Experimental demonstration of the algorithms and comparison of some of their features were proposed to take place within the framework of a previously mutually agreed-upon demonstration scenario, or "base-case." The base-case scenario, described in detail in Appendix A, involved autonomous navigation by the robot in an a priori unknown environment with dynamic obstacles, in order to reach a predetermined goal. From the intermediate goal location, the robot had to search for and locate a control panel, move toward it, and dock in front of the panel face. The CESAR demonstration was successfully accomplished using the HERMIES-IIB robot, while subsets of the CEA demonstration performed using the ARES robot simulation and animation system were presented. The first session of the workshop focused on these experimental demonstrations and on the needs and considerations for establishing "benchmarks" for testing autonomous robot control algorithms.

  4. Navigation strategies for multiple autonomous mobile robots moving in formation

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1991-01-01

    The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
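    The nearest-neighbor tracking strategy described above can be sketched as a discrete-time particle simulation: each follower steers toward a fixed offset from the robot ahead of it in the formation chain. The kinematics, gain, and offsets below are illustrative assumptions, not Wang's actual equations of motion.

```python
def formation_step(positions, leader_vel, offsets, gain, dt):
    """One integration step of nearest-neighbour formation tracking.

    positions: (x, y) of each robot, index 0 is the leader.
    offsets[i]: desired displacement of robot i from robot i-1.
    gain: proportional steering gain (illustrative)."""
    lx, ly = positions[0]
    new = [(lx + leader_vel[0] * dt, ly + leader_vel[1] * dt)]
    for i in range(1, len(positions)):
        # Desired slot: nearest neighbour (robot i-1) plus the offset.
        tx = new[i - 1][0] + offsets[i][0]
        ty = new[i - 1][1] + offsets[i][1]
        x, y = positions[i]
        new.append((x + gain * (tx - x) * dt, y + gain * (ty - y) * dt))
    return new
```

    Iterating this step drives each follower exponentially toward its slot behind the leader, which is the kind of convergence the stability condition in the paper is meant to guarantee.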

  5. Semi-autonomous exploration of multi-floor buildings with a legged robot

    NASA Astrophysics Data System (ADS)

    Wenger, Garrett J.; Johnson, Aaron M.; Taylor, Camillo J.; Koditschek, Daniel E.

    2015-05-01

    This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, and so in order for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings it must be able to ascend and descend stairwells. However most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating use of the more mobile legged machine. This semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission. The gait selection and transitions between the walking and stair climbing gaits are entirely autonomous. This implementation uses an RGBD camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior. For example, complete automation of the building exploration task driven by a mapping algorithm and higher level planner is presently under development.

  6. Behavior-based multi-robot collaboration for autonomous construction tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.

  7. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke.

    PubMed

    Johnson, Michelle J

    2006-12-18

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy as possible solutions to exercise compliance in under-supervised environments such as the home.

  8. The AGINAO Self-Programming Engine

    NASA Astrophysics Data System (ADS)

    Skaba, Wojciech

    2013-01-01

    The AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supplies the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

  9. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    PubMed Central

    Johnson, Michelle J

    2006-01-01

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy as possible solutions to exercise compliance in under-supervised environments such as the home. PMID:17176474

  10. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1990-01-01

    A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.

  11. Biomechanics of Step Initiation After Balance Recovery With Implications for Humanoid Robot Locomotion.

    PubMed

    Miller Buffinton, Christine; Buffinton, Elise M; Bieryla, Kathleen A; Pratt, Jerry E

    2016-03-01

    Balance-recovery stepping is often necessary for both a human and a humanoid robot to avoid a fall, by taking a single step or multiple steps after an external perturbation. The determination of where to step to come to a complete stop has been studied, but little is known about the strategy for initiating forward motion from the static position following such a step. The goal of this study was to examine the human strategy for stepping by moving the back foot forward from a static, double-support position, comparing parameters from normal step length (SL) to those from increasing SLs up to the point of step failure, to provide inspiration for a humanoid control strategy. Healthy young adults instrumented with joint reflective markers executed a prescribed-length step from rest while marker positions and ground reaction forces (GRFs) were measured. The participants were scaled to the Gait2354 model in OpenSim software to calculate body kinematic and joint kinetic parameters, with further post-processing in MATLAB. With increasing SL, participants reduced both static and push-off back-foot GRF. The body center of mass (CoM) lowered and moved forward, with additional lowering at the longer steps, and followed a path centered within the initial base of support (BoS). Step execution was successful if participants gained enough forward momentum at toe-off to move the instantaneous capture point (ICP) to within the BoS defined by the final position of both feet on the front force plate. All lower-extremity joint torques increased with SL except at the ankle. Front-knee work increased dramatically with SL, accompanied by a decrease in back-ankle work. As SL increased, the human strategy changed: participants shifted their CoM forward and downward before toe-off, thus gaining forward momentum, while using less propulsive work from the back ankle and engaging the front knee to straighten the body.
The results have significance for human motion, suggesting the upper limit of the SL that can be completed with back-ankle push-off before additional knee flexion and torque is needed. For biped control, the results support stability based on capture-point dynamics and suggest strategy for center-of-mass trajectory and distribution of ground force reactions that can be compared with robot controllers for initiation of gait after recovery steps.
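    The instantaneous capture point used in this success criterion has a standard closed form under linear inverted pendulum assumptions: the CoM position plus the CoM velocity divided by the pendulum time constant. A one-axis sketch (function names are ours, not the paper's):

```python
import math

def instantaneous_capture_point(com_pos, com_vel, com_height, g=9.81):
    """ICP along one horizontal axis under linear inverted pendulum
    assumptions: the ground point where stepping brings the body to
    a complete stop."""
    omega = math.sqrt(g / com_height)  # pendulum time constant (1/s)
    return com_pos + com_vel / omega

def step_is_stable(icp, bos_min, bos_max):
    """The criterion above: a step succeeds if the ICP lies within
    the base of support defined by the final foot placement."""
    return bos_min <= icp <= bos_max
```

    Gaining forward momentum at toe-off moves the ICP forward; the step succeeds only if it lands inside the new base of support, matching the criterion the study applies.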

  12. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background The combination of robotic tools with assistive technology opens a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young normally-limbed subjects. The experiments were performed within a closed low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. 
    The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between both, has proven consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation. PMID:20163735
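    An EKF-based SLAM loop like the one described alternates a prediction step driven by odometry with update steps driven by line/corner observations. A minimal prediction step for a unicycle-model robot is sketched below; the state layout `[x, y, theta, ...landmarks]` and the noise model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ekf_predict(mu, Sigma, v, w, dt, R):
    """EKF-SLAM prediction: propagate the robot pose (first three
    state entries) with a unicycle motion model; landmark estimates
    stay fixed but their cross-covariances are propagated via G.

    v, w: linear and angular velocity commands; R: control noise
    added to the pose block of the covariance."""
    x, y, th = mu[0], mu[1], mu[2]
    mu = mu.copy()
    mu[0] = x + v * dt * np.cos(th)
    mu[1] = y + v * dt * np.sin(th)
    mu[2] = th + w * dt
    # Jacobian of the motion model w.r.t. the full state (identity
    # except for the pose rows that depend on heading).
    G = np.eye(len(mu))
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] = v * dt * np.cos(th)
    Sigma = G @ Sigma @ G.T
    Sigma[:3, :3] += R
    return mu, Sigma
```

    The corresponding update step (omitted here) would fold in each observed line or corner feature sequentially, which is what makes the filter "sequential" in the paper's sense.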

  13. Autonomous mobile robot research using the HERMIES-III robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Beckerman, M.; Spelt, P.F.

    1989-01-01

    This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III comprises a battery-powered, omni-directional wheeled platform with a seven-degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations necessarily arising from the set-up of an experimental program involving the performance of human-scale tasks by multiple autonomous mobile robots. 10 refs., 3 figs.

  14. Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

    NASA Astrophysics Data System (ADS)

    Barakova, E. I.; De Haas, M.; Kuijpers, W.; Irigoyen, N.; Betancourt, A.

    2018-01-01

    In search of better technological solutions for education, we adapted a principle from economic game theory, namely that giving help promotes collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children when playing a checkers game, combined with a social strategy, against a robot or against a computer. We found that by combining the social and game strategies the children (average age of 8.3 years) had more empathy and social engagement with the robot, since the children did not necessarily want to win against it. This finding is promising for using social strategies to create long-term relations between robots and children and to make educational tasks more engaging. An additional outcome of the study was the significant difference in the children's perception of the difficulty of the game: the game with the robot was seen as more challenging, and the robot as a smarter opponent. This finding might be due to the higher perceived or expected intelligence of the robot, or to the higher complexity of seeing patterns in a three-dimensional world.

  15. Progress in EEG-Based Brain Robot Interaction Systems

    PubMed Central

    Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since control of robot systems via brainwaves must consider surrounding environment feedback in real time, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488
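    The decoding chain the review describes (preprocessing, feature extraction, feature classification) can be sketched end-to-end in a few lines. Band-power features and a nearest-centroid classifier are one common, deliberately simple choice, not the specific methods of any system surveyed; the function names are ours.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Average spectral power of `signal` within a frequency band
    (Hz), a classic EEG feature (e.g. mu/alpha-band power for
    motor imagery)."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def classify(features, centroids):
    """Nearest-centroid classification mapping a feature vector to
    a robot command label (stand-in for the classifiers surveyed)."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]
```

    A real-time BRI system would run this chain on each EEG window and forward the winning label to the robot control architecture, which is where the review's concerns about feedback latency and robot dynamics enter.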

  16. Primate Anatomy, Kinematics, and Principles for Humanoid Design

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert O.; Ambrose, Catherine G.

    2004-01-01

    The primate order of animals is investigated for clues in the design of humanoid robots. The pursuit is directed by a theory that kinematics, musculature, perception, and cognition can be optimized for specific tasks by varying the proportions of limbs, and in particular, the points of branching in kinematic trees such as the primate skeleton. Called the Bifurcated Chain Hypothesis, the theory is that the branching proportions found in humans may be superior to those of other animals and primates for the tasks of dexterous manipulation and other human specialties. The primate taxa are defined, contemporary primate evolution hypotheses are critiqued, and variations within the order are noted. The kinematic branching points of the torso, limbs and fingers are studied for differences in proportions across the order, and associated with family and genus capabilities and behaviors. The human configuration of a long waist, long neck, and short arms is graded using a kinematic workspace analysis and a set of design axioms for mobile manipulation robots. It scores well. The re-emergence of the human waist, seen in early prosimians and monkeys for arboreal balance but lost in the terrestrial Pongidae, is postulated as benefiting human dexterity. The human combination of an articulated waist and neck is shown to enable the use of smaller arms, achieving greater regions of workspace dexterity than the larger limbs of gorillas and other Hominoidea.

  17. Elastic MCF Rubber with Photovoltaics and Sensing for Use as Artificial or Hybrid Skin (H-Skin): 1st Report on Dry-Type Solar Cell Rubber with Piezoelectricity for Compressive Sensing.

    PubMed

    Shimada, Kunio

    2018-06-05

    Ordinary solar cells are very difficult to bend, to squash under compression, or to extend under tension. However, if they were to possess elastic, flexible, and extensible properties, in addition to piezoelectricity and resistivity, they could be put to effective use as artificial skin installed over human-like robots or humanoids. Further, such a skin could serve as a husk that generates electric power from solar energy and perceives any force or temperature changes. Therefore, we propose a new type of artificial skin, called hybrid skin (H-Skin), for a humanoid robot having hybrid functions. In this study, a novel elastic solar cell is developed from natural rubber that is electrolytically polymerized, with a configuration of magnetic clusters of metal particles incorporated into the rubber by applying a magnetic field. The material thus produced, named magnetic compound fluid rubber (MCF rubber), is elastic, flexible, and extensible. The present report deals with a dry-type MCF rubber solar cell that uses photosensitized dye molecules. First, the photovoltaic mechanism in the material is investigated. Next, the changes in the photovoltaic properties of its molecules due to irradiation by visible light are measured under compression. The effect of the compression on its piezoelectric properties is investigated.

  18. An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence

    NASA Technical Reports Server (NTRS)

    Lindley, Craig A.

    1993-01-01

    This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.

  19. Autonomous assistance navigation for robotic wheelchairs in confined spaces.

    PubMed

    Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F

    2010-01-01

    In this work, a visual interface for assisting a robotic wheelchair's navigation is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface supports two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, -90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with information concerning the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results of the interface are also shown in this work.
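The fixed-angle turns described above depend on the wheelchair's differential-drive kinematics. As a hedged sketch (the wheel base and wheel speed are invented values, not parameters from the paper), an in-place turn can be commanded as:

```python
import math

def turn_in_place(theta_deg, wheel_base=0.6, wheel_speed=0.2):
    """Wheel commands and duration for an in-place turn of `theta_deg` degrees
    on a differential-drive base: wheels run at +/- `wheel_speed` (m/s).
    Yaw rate of a differential drive: omega = (v_right - v_left) / wheel_base."""
    omega = 2.0 * wheel_speed / wheel_base          # rad/s for opposed wheels
    duration = math.radians(abs(theta_deg)) / omega
    sign = 1.0 if theta_deg >= 0 else -1.0
    # Positive angle: left wheel backward, right wheel forward (counterclockwise).
    return (-sign * wheel_speed, sign * wheel_speed, duration)
```

In the paper the turn is supervised by the SLAM pose estimate rather than executed open-loop like this.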

  20. Tracked robot controllers for climbing obstacles autonomously

    NASA Astrophysics Data System (ADS)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has seen very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.
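The abstract does not detail the learning algorithm, so the sketch below uses generic tabular Q-learning over a toy "track angle" state space as a stand-in; the states, actions, and reward structure are made up for illustration.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning. `step(s, a) -> (next_state, reward, done)` is the
    environment; epsilon-greedy exploration with learning rate `alpha`."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.randrange(n_actions)          # explore
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])  # exploit
            s2, r, done = step(s, a)
            # Standard temporal-difference backup.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

A toy environment where raising the track angle (action 1) eventually clears the obstacle lets the learner discover the "raise" policy on its own.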

  1. Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.; hide

    2007-01-01

    Under the direction of NASA's Exploration Technology Development Program, robots and space suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona close to where Apollo Astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.

  2. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid.

    PubMed

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly regarding feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness, using Inertial Measurement Units (IMU)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, with regard to the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures when it comes to considering the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in its relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions.

  3. Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?

    PubMed Central

    Martini, Molly C.; Gonzalez, Christian A.; Wiese, Eva

    2016-01-01

    Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution and whether different aspects of having a mind (e.g., feeling pain, being able to move) need different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood with which mind is attributed towards non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described as a two-linear model with no change in mind attribution on the spectrum from mechanistic to humanoid robot, but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached, that is: agents need to be classified as having a mind first before the addition of more human-like features significantly increases the degree to which mind is attributed to that agent. PMID:26745500
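The "two-linear" model described above is a segmented regression with an unknown breakpoint: flat below a human-likeness threshold, rising above it. A brute-force fitting sketch on synthetic data (the flat-then-rising shape mirrors the reported finding, but all numbers here are invented):

```python
import numpy as np

def fit_two_segment(x, y):
    """Fit a flat segment followed by a rising line, scanning every candidate
    breakpoint and keeping the one with minimum squared error."""
    best = None
    for i in range(2, len(x) - 2):
        left_mean = y[:i].mean()                      # flat segment level
        A = np.vstack([x[i:], np.ones(len(x) - i)]).T # design matrix: slope, intercept
        coef, *_ = np.linalg.lstsq(A, y[i:], rcond=None)
        sse = ((y[:i] - left_mean) ** 2).sum() + ((A @ coef - y[i:]) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, x[i], left_mean, coef)
    return best[1:]  # (breakpoint, flat level, (slope, intercept))
```

The scan is O(n) least-squares fits, which is fine for the small stimulus sets typical of appearance-spectrum studies.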

  4. Human motion characteristics in relation to feeling familiar or frightened during an announced short interaction with a proactive humanoid

    PubMed Central

    Baddoura, Ritta; Venture, Gentiane

    2014-01-01

    During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly regarding feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness, using Inertial Measurement Units (IMU)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, with regard to the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures when it comes to considering the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in its relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions. PMID:24688466

  5. Robot Comedy Lab: experimenting with the social dynamics of live performance

    PubMed Central

    Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias

    2015-01-01

    The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the coordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them and negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they're having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585

  6. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.

  7. Fully decentralized control of a soft-bodied robot inspired by true slime mold.

    PubMed

    Umedachi, Takuya; Takeda, Koichi; Nakagaki, Toshiyuki; Kobayashi, Ryo; Ishiguro, Akio

    2010-03-01

    Animals exhibit astoundingly adaptive and supple locomotion under real-world constraints. In order to endow robots with similar capabilities, we must implement many degrees of freedom, equivalent to animals, into the robots' bodies. For taming many degrees of freedom, the concept of autonomous decentralized control plays a pivotal role. However, a systematic way of designing such an autonomous decentralized control system is still missing. Aiming at understanding the principles that underlie animals' locomotion, we have focused on a true slime mold, a primitive living organism, and extracted a design scheme for an autonomous decentralized control system. In order to validate this design scheme, this article presents a soft-bodied amoeboid robot inspired by the true slime mold. Significant features of this robot are twofold: (1) the robot has a truly soft and deformable body stemming from real-time tunable springs and protoplasm, the former used for the outer skin of the body and the latter to satisfy the law of conservation of mass; and (2) fully decentralized control using coupled oscillators with a completely local sensory feedback mechanism, realized by exploiting the long-distance physical interaction between the body parts stemming from the law of conservation of protoplasmic mass. Simulation results show that this robot exhibits highly supple and adaptive locomotion without relying on any hierarchical structure. The results obtained are expected to shed new light on design methodology for autonomous decentralized control systems.
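The control scheme named in the abstract, coupled oscillators with local sensory feedback, can be sketched as a ring of phase oscillators; the sinusoidal coupling form and all parameters below are generic assumptions, not the authors' equations.

```python
import math

def oscillator_step(phases, coupling, feedback, omega=1.0, dt=0.01):
    """One Euler step of a ring of locally coupled phase oscillators.
    Each unit sees only its two neighbors plus its own sensory feedback term,
    so the update is fully decentralized."""
    n = len(phases)
    out = []
    for i, p in enumerate(phases):
        left, right = phases[(i - 1) % n], phases[(i + 1) % n]
        dp = (omega
              + coupling * (math.sin(left - p) + math.sin(right - p))
              + feedback[i])
        out.append((p + dp * dt) % (2 * math.pi))
    return out
```

Iterating this step with feedback terms derived from local body deformation is the usual CPG-style recipe; in the paper the feedback arises physically through protoplasm conservation rather than an explicit signal.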

  8. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot, in real-time detection of objects by color, and in processing the robot's range and vision sensor data for navigation.

  9. A development of intelligent entertainment robot for home life

    NASA Astrophysics Data System (ADS)

    Kim, Cheoltaek; Lee, Ju-Jang

    2005-12-01

    This paper presents the study and design of an intelligent entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The development had three objectives: (1) develop an autonomous robot; (2) design the robot for mobility and robustness; (3) develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented with active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering esthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for remote-connection service and educational purposes.

  10. Research state-of-the-art of mobile robots in China

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Zhao, Jinglun; Zhang, Peng; Li, Shiqing

    1991-03-01

    Several newly developed mobile robots in China are described in this paper, including a master-slave telerobot, a six-legged robot, a biped walking robot, a remote inspection robot, a crawler-type mobile robot, and an autonomous mobile vehicle. Some relevant technologies are also described.

  11. Full autonomous microline trace robot

    NASA Astrophysics Data System (ADS)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the size of the robot is miniaturized, the number of optoelectric detectors becomes limited, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed that acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence, which is based on an AT89C2051 microcontroller that controls its movement. The technical details of the micro robot are as follows: dimensions 30 mm × 25 mm × 35 mm; velocity 60 mm/s.
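A line-trace rule of the kind the abstract describes can be sketched with a handful of reflectance readings. The three-sensor layout and the speed values are illustrative assumptions (the 60 mm/s straight-line speed matches the reported velocity; the rest is invented), not the authors' AT89C2051 firmware:

```python
def line_follow_step(left_dark, mid_dark, right_dark):
    """Return (left_speed, right_speed) in mm/s from three reflectance
    sensors; `*_dark` is True when that sensor sees the black line."""
    if mid_dark and not (left_dark or right_dark):
        return (60, 60)    # centered on the line: go straight
    if left_dark:
        return (20, 60)    # line drifting left: slow the left side
    if right_dark:
        return (60, 20)    # line drifting right: slow the right side
    return (-20, 20)       # line lost: rotate in place to search
```

On a microcontroller this would be a tight polling loop mapping the same sensor pattern to PWM duty cycles.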

  12. Fusing Laser Reflectance and Image Data for Terrain Classification for Small Autonomous Robots

    DTIC Science & Technology

    2014-12-01

    limit us to low power, lightweight sensors, and a maximum range of approximately 5 meters. Contrast these robot characteristics to typical terrain classification work, which uses large autonomous ground vehicles with sensors mounted high above the ground. Terrain classification for small autonomous ... into predefined classes [10], [11]. However, wheeled vehicles offer the ability to use non-traditional sensors such as vibration sensors [12] and

  13. INL Autonomous Navigation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.

  14. Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2

    DTIC Science & Technology

    2015-03-01

    SUPPLEMENTARY NOTES: DCS Corporation, Alexandria, VA. ABSTRACT: In the past, robot operation has been a high-cognitive ... increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the subsequent course of action planned by the robot. Eight active-duty, US Army Soldiers completed 16 scenario missions using an operator interface

  15. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust motor controls, the joystick was removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
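Of the three algorithms compared, the potential fields approach is the most self-contained to sketch: the goal attracts the robot while segmented obstacles repel it within an influence radius. The gains and distance below are invented values, and the paper's own formulation may differ:

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant force on the robot at `pos` (x, y): linear attraction to
    `goal` plus the standard repulsive gradient from each obstacle closer
    than the influence distance d0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            # Gradient of 0.5 * k_rep * (1/d - 1/d0)^2, pointing away from the obstacle.
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**3
            fx += mag * dx
            fy += mag * dy
    return fx, fy
```

The force vector is then mapped to steering and speed commands; local minima between obstacles are the classic failure mode that the reactive and learning approaches sidestep differently.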

  16. From Autonomous Robots to Artificial Ecosystems

    NASA Astrophysics Data System (ADS)

    Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato

    During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.

  17. Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANCINI,THOMAS R.

    2001-04-01

    Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.

  18. On-rail solution for autonomous inspections in electrical substations

    NASA Astrophysics Data System (ADS)

    Silva, Bruno P. A.; Ferreira, Rafael A. M.; Gomes, Selson C.; Calado, Flavio A. R.; Andrade, Roberto M.; Porto, Matheus P.

    2018-05-01

    This work presents an alternative solution for autonomous inspections in electrical substations. The autonomous system is a robot that moves on rails, collects infrared and visible images of selected targets, processes the data, and predicts the components' lifetime. The robot moves on rails to overcome the difficulties found in the unpaved substations commonly encountered in Brazil. We take advantage of the rails to convey the data along them, minimizing electromagnetic interference while at the same time transmitting electrical energy to feed the autonomous system. As part of the quality control process, we compared thermographic inspections made by the robot with inspections made by a trained thermographer using a scientific camera Flir® SC660. The results show that the robot achieved satisfactory results, identifying components and measuring temperature accurately. The embedded routine accounts for weather changes over the day, providing a standard result for the components' thermal response, and also gives the uncertainty of the temperature measurement, contributing to quality in the decision-making process.

  19. Adaptive Behavior for Mobile Robots

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2009-01-01

    The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.

  20. Hybrid position and orientation tracking for a passive rehabilitation table-top robot.

    PubMed

    Wojewoda, K K; Culmer, P R; Gallagher, J F; Jackson, A E; Levesley, M C

    2017-07-01

    This paper presents a real time hybrid 2D position and orientation tracking system developed for an upper limb rehabilitation robot. Designed to work on a table-top, the robot is to enable home-based upper-limb rehabilitative exercise for stroke patients. Estimates of the robot's position are computed by fusing data from two tracking systems, each utilizing a different sensor type: laser optical sensors and a webcam. Two laser optical sensors are mounted on the underside of the robot and track the relative motion of the robot with respect to the surface on which it is placed. The webcam is positioned directly above the workspace, mounted on a fixed stand, and tracks the robot's position with respect to a fixed coordinate system. The optical sensors sample the position data at a higher frequency than the webcam, and a position and orientation fusion scheme is proposed to fuse the data from the two tracking systems. The proposed fusion scheme is validated through an experimental set-up whereby the rehabilitation robot is moved by a humanoid robotic arm replicating previously recorded movements of a stroke patient. The results prove that the presented hybrid position tracking system can track the position and orientation with greater accuracy than the webcam or optical sensors alone. The results also confirm that the developed system is capable of tracking recovery trends during rehabilitation therapy.
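The abstract does not specify the fusion scheme, so the sketch below uses a simple complementary filter as a stand-in: the high-rate optical sensors dead-reckon the pose between camera frames, and each low-rate webcam fix pulls the estimate back toward the absolute measurement. The gain value is an assumption.

```python
class HybridTracker:
    """Fuse high-rate relative optical increments with low-rate absolute
    webcam fixes via a complementary-filter correction (gain `k`)."""
    def __init__(self, x0=0.0, y0=0.0, k=0.3):
        self.x, self.y, self.k = x0, y0, k

    def optical_update(self, dx, dy):
        # Dead-reckon between webcam frames; any sensor bias accumulates here.
        self.x += dx
        self.y += dy

    def webcam_update(self, x_abs, y_abs):
        # Blend toward the absolute fix, cancelling accumulated drift.
        self.x += self.k * (x_abs - self.x)
        self.y += self.k * (y_abs - self.y)
```

The same two-rate structure extends to orientation by blending heading angles (with wrap-around handling) alongside position.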

  1. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
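The LIPM that the proposed DLIPM builds on reduces sagittal balance to one linear relation: CoM acceleration = (g / z_c) * (x_com - x_zmp), where z_c is the constant CoM height. A minimal Euler-integration sketch (the CoM height and time step are assumed values, and the DLIPM's learned corrections are not modeled):

```python
G = 9.81  # gravity, m/s^2

def lipm_step(x, v, x_zmp, z_c=0.8, dt=0.005):
    """One Euler step of the Linear Inverted Pendulum Model.
    x, v: CoM position and velocity; x_zmp: current zero moment point."""
    a = (G / z_c) * (x - x_zmp)   # CoM diverges away from the ZMP
    return x + v * dt, v + a * dt
```

Step-shaped ZMP variations like those in the push-recovery trials correspond to jumping `x_zmp` between calls: shifting the ZMP ahead of the CoM is what brakes a forward fall.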

  2. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed

    Erikson, Henrik; Salzmann-Erikson, Martin

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance.

  3. Future Challenges of Robotics and Artificial Intelligence in Nursing: What Can We Learn from Monsters in Popular Culture?

    PubMed Central

    Erikson, Henrik; Salzmann-Erikson, Martin

    2016-01-01

    It is highly likely that artificial intelligence (AI) will be implemented in nursing robotics in various forms, both in medical and surgical robotic instruments, but also as different types of droids and humanoids, physical reinforcements, and also animal/pet robots. Exploring and discussing AI and robotics in nursing and health care before these tools become commonplace is of great importance. We propose that monsters in popular culture might be studied with the hope of learning about situations and relationships that generate empathic capacities in their monstrous existences. The aim of the article is to introduce the theoretical framework and assumptions behind this idea. Both robots and monsters are posthuman creations. The knowledge we present here gives ideas about how nursing science can address the postmodern, technologic, and global world to come. Monsters therefore serve as an entrance to explore technologic innovations such as AI. Analyzing when and why monsters step out of character can provide important insights into the conceptualization of caring and nursing as a science, which is important for discussing these empathic protocols, as well as more general insight into human knowledge. The relationship between caring, monsters, robotics, and AI is not as farfetched as it might seem at first glance. PMID:27455058

  4. AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.

    2002-02-01

    One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the computer's inability to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (reference: A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec-G4 PowerPC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II Semi-Autonomous Surrogate Vehicle processing architectures as well as desktop personal computers (PCs). Performance gains are highlighted along with progress to date, and lessons learned and future directions are described.

  5. On-Line Point Positioning with Single Frame Camera Data

    DTIC Science & Technology

    1992-03-15

    ...tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data. The project has been defined as "On-line point... development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation... Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile

  6. Inexpensive robots used to teach dc circuits and electronics

    NASA Astrophysics Data System (ADS)

    Sidebottom, David L.

    2017-05-01

    This article describes inexpensive, autonomous robots, built without microprocessors, used in a college-level introductory physics laboratory course to motivate student learning of dc circuits. Detailed circuit descriptions are provided as well as a week-by-week course plan that can guide students from elementary dc circuits, through Kirchhoff's laws, and into simple analog integrated circuits with the motivational incentive of building an autonomous robot that can compete with others in a public arena.

  7. [Supporting an ASD child with digital tools].

    PubMed

    Vallart, Etienne; Gicquel, Ludovic

    Autism spectrum disorders (ASD) lead to long-term and severe impairment of communication and social interactions. Information and communication technologies, through digital applications that run on a variety of devices, can support these functions, which are necessary for the development of children with ASD. Applications, serious games and even humanoid robots help to boost children's interest in learning. They must, however, form part of a broader range of therapies. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  8. KSC-2010-4379

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  9. KSC-2010-4378

    NASA Image and Video Library

    2010-08-12

    CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, Ron Diftler, NASA Robonaut project manager, describes the animation of the dexterous humanoid astronaut helper, Robonaut (R2) to the media. R2 will fly to the International Space Station aboard space shuttle Discovery on the STS-133 mission. Although it will initially only participate in operational tests, upgrades could eventually allow the robot to realize its true purpose -- helping spacewalking astronauts with tasks outside the space station. Photo credit: NASA/Jim Grossmann

  10. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    Sample Return Robot Challenge staff members confer before team Survey's robot makes its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  11. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    A robot from the University of Waterloo Robotics Team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  12. Robot Lies in Health Care: When Is Deception Morally Permissible?

    PubMed

    Matthias, Andreas

    2015-06-01

    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This poses the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.

  13. Supervised autonomous robotic soft tissue surgery.

    PubMed

    Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W

    2016-05-04

    The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.

  14. Autonomous intelligent military robots: Army ants, killer bees, and cybernetic soldiers

    NASA Astrophysics Data System (ADS)

    Finkelstein, Robert

    The rationale for developing autonomous intelligent robots in the military is to render conventional warfare systems ineffective and indefensible. The Desert Storm operation demonstrated the effectiveness of such systems as unmanned air and ground vehicles and indicated the future possibilities of robotic technology. Robotic military vehicles would have the advantages of expendability, low cost, lower complexity compared to manned systems, survivability, maneuverability, and a capability to share in instantaneous communication and distributed processing of combat information. Basic characteristics of intelligent systems and hierarchical control systems with sensor inputs are described. Genetic algorithms are seen as a means of achieving appropriate levels of intelligence in a robotic system. Potential impacts of robotic technology in the military are outlined.

  15. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team KuuKulgur watches as their robots attempt the level one competition during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  16. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Retrievers team robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  17. Plugin-docking system for autonomous charging using particle filter

    NASA Astrophysics Data System (ADS)

    Koyasu, Hiroshi; Wada, Masayoshi

    2017-03-01

    Autonomous charging of the robot's battery is a key function for expanding the working areas of robots. To realize it, most existing systems use custom docking stations or artificial markers; in other words, they can only charge from a few specific outlets. If this limitation is removed, the working areas of robots expand significantly. In this paper, we describe a plugin-docking system for autonomous charging that does not require any custom docking station or artificial marker. A single camera is used to recognize the 3D position of an outlet socket. A particle filter-based image tracking algorithm that is robust to illumination change is applied. The algorithm is implemented on a robot with an omnidirectional moving system. The experimental results show the effectiveness of our system.
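    The abstract names a particle filter but not its form; a minimal one-dimensional predict-weight-resample cycle, of the kind commonly used for such visual tracking, is sketched below. The noise parameters and function name are illustrative assumptions, not values from the paper, and a real implementation would track the socket's full image-space or 3D pose rather than a scalar.

    ```python
    import math
    import random

    def particle_filter_step(particles, measurement,
                             motion_noise=0.05, meas_noise=0.1):
        """One predict-weight-resample cycle of a minimal 1-D particle
        filter. Noise levels are illustrative only."""
        # Predict: diffuse each particle with Gaussian motion noise.
        particles = [p + random.gauss(0.0, motion_noise) for p in particles]
        # Weight: Gaussian likelihood of the measurement under each particle.
        weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
                   for p in particles]
        total = sum(weights)
        if total == 0.0:  # degenerate case: all particles far from measurement
            weights = [1.0] * len(particles)
            total = float(len(particles))
        weights = [w / total for w in weights]
        # Resample: draw a new particle set proportionally to the weights.
        return random.choices(particles, weights=weights, k=len(particles))
    ```

    Robustness to illumination change, as claimed in the paper, would come from the choice of likelihood function (e.g. comparing illumination-invariant features rather than raw intensities), not from the filter structure itself.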

  18. A Review of Robotics in Neurorehabilitation: Towards an Automated Process for Upper Limb

    PubMed Central

    Sánchez-Herrera, P.; Balaguer, C.; Jardón, A.

    2018-01-01

    Robot-mediated neurorehabilitation is a growing field that seeks to incorporate advances in robotics combined with neuroscience and rehabilitation to define new methods for treating problems related to neurological diseases. In this paper, a systematic literature review is conducted to identify the contribution of robotics to upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify prospective research directions in the development of more autonomous rehabilitation processes. With this aim, first, a general rehabilitation process is studied and defined, and then particularized for the case of neurorehabilitation, identifying the components involved in the cycle and the degree of interaction between them. Next, this generic process is compared with the current literature in robotics focused on upper limb treatment, analyzing which components of the rehabilitation cycle are being investigated. Finally, the challenges and opportunities in obtaining more autonomous rehabilitation processes are discussed. In addition, based on this study, a series of technical requirements that should be taken into account when designing and implementing autonomous robotic systems for rehabilitation is presented and discussed. PMID:29707189

  19. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera and an IR sensor, for map-building mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give a short review of existing techniques, focusing on the main results relevant to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principles of the navigation and guidance architecture, then with the detailed functions of recognizing environment updates, obstacle detection and motion assessment, together with the first results from the simulation runs.

  20. Development of an Interactive Augmented Environment and Its Application to Autonomous Learning for Quadruped Robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi

    This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.

  1. Autonomous Realtime Threat-Hunting Robot (ARTHR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    INL

    2008-05-29

    Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.

  2. Autonomous Realtime Threat-Hunting Robot (ARTHR)

    ScienceCinema

    INL

    2017-12-09

    Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats.

  3. Supervisory autonomous local-remote control system design: Near-term and far-term applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul

    1993-01-01

    The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.

  4. An architectural approach to create self organizing control systems for practical autonomous robots

    NASA Technical Reports Server (NTRS)

    Greiner, Helen

    1991-01-01

    For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are as follows: (1) the growth problem; (2) software generation; (3) interaction with the environment; (4) reliability; and (5) resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.

  5. Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery.

    PubMed

    Woo, Sang-Yoon; Lee, Sang-Jeong; Yoo, Ji-Yong; Han, Jung-Joon; Hwang, Soon-Jung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Yi, Won-Jin

    2017-12-01

    The purpose of this study was to develop a new method for enabling a robot to assist a surgeon in repositioning a bone segment to accurately transfer a preoperative virtual plan into the intraoperative phase in orthognathic surgery. We developed a robot system consisting of an arm with six degrees of freedom, a robot motion-controller, and a PC. An end-effector at the end of the robot arm transferred the movements of the robot arm to the patient's jawbone. The registration between the robot and CT image spaces was performed completely preoperatively, and the intraoperative registration could be finished using only position changes of the tracking tools at the robot end-effector and the patient's splint. The phantom's maxillomandibular complex (MMC) connected to the robot's end-effector was repositioned autonomously by the robot movements around an anatomical landmark of interest based on the tool center point (TCP) principle. The robot repositioned the MMC around the TCP of the incisor of the maxilla and the pogonion of the mandible following plans for real orthognathic patients. The accuracy of the robot's repositioning increased when an anatomical landmark for the TCP was close to the registration fiducials. In spite of this influence, we could increase the repositioning accuracy at the landmark by using the landmark itself as the TCP. With its ability to incorporate virtual planning using a CT image and autonomously execute the plan around an anatomical landmark of interest, the robot could help surgeons reposition bones more accurately and dexterously. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
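    The tool-center-point (TCP) principle described above amounts to applying a rigid rotation about the chosen anatomical landmark rather than about an arbitrary frame origin. A minimal geometric sketch of that operation is given below; the function name and interface are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def rotate_about_tcp(points, R, tcp, translation=None):
        """Rigidly move 3D points by rotating about a tool center
        point (TCP) and optionally translating:
            p' = R (p - tcp) + tcp + t
        `points` is an (N, 3) array-like, `R` a 3x3 rotation matrix."""
        tcp = np.asarray(tcp, dtype=float)
        t = np.zeros(3) if translation is None else np.asarray(translation, float)
        pts = np.asarray(points, dtype=float)
        # (pts - tcp) @ R.T applies R to each row vector.
        return (pts - tcp) @ np.asarray(R, dtype=float).T + tcp + t
    ```

    Because the landmark itself is the fixed point of a pure rotation, any registration error concentrated at that point does not displace it, which is consistent with the paper's observation that using the landmark of interest as the TCP improves repositioning accuracy there.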

  6. Autonomous planetary rover

    NASA Astrophysics Data System (ADS)

    Krotkov, Eric; Simmons, Reid; Whittaker, William

    1992-02-01

    This report describes progress in research on an autonomous robot for planetary exploration performed during 1991 at the Robotics Institute, Carnegie Mellon University. The report summarizes the achievements during calendar year 1991, and lists personnel and publications. In addition, it includes several papers resulting from the research. Research in 1991 focused on understanding the unique capabilities of the Ambler mechanism and on autonomous walking in rough, natural terrain. We also designed a sample acquisition system, and began to configure a successor to the Ambler.

  7. Toward Autonomous Multi-floor Exploration: Ascending Stairway Localization and Modeling

    DTIC Science & Technology

    2013-03-01

    robots have traditionally been restricted to single floors of a building or outdoor areas free of abrupt elevation changes such as curbs and stairs... solution to this problem and is motivated by the rich potential of an autonomous ground robot that can climb stairs while exploring a multi-floor... parameters of the stairways, the robot could plan a path that traverses the stairs in order to explore the frontier at other elevations that were previously

  8. Automation and robotics technology for intelligent mining systems

    NASA Technical Reports Server (NTRS)

    Welsh, Jeffrey H.

    1989-01-01

    The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.

  9. Science, technology and the future of small autonomous drones.

    PubMed

    Floreano, Dario; Wood, Robert J

    2015-05-28

    We are witnessing the advent of a new era of robots - drones - that can autonomously fly in natural and man-made environments. These robots, often associated with defence applications, could have a major impact on civilian tasks, including transportation, communication, agriculture, disaster mitigation and environment preservation. Autonomous flight in confined spaces presents great scientific and technical challenges owing to the energetic cost of staying airborne and to the perceptual intelligence required to negotiate complex environments. We identify scientific and technological advances that are expected to translate, within appropriate regulatory frameworks, into pervasive use of autonomous drones for civilian applications.

  10. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.

  11. A design strategy for autonomous systems

    NASA Technical Reports Server (NTRS)

    Forster, Pete

    1989-01-01

    Some solutions to crucial issues regarding the competent performance of an autonomously operating robot are identified; namely, handling multiple and variable data sources containing overlapping information, and maintaining coherent operation while responding adequately to changes in the environment. Support for the ideas developed for the construction of such behavior is drawn from speculations in the study of cognitive psychology, an understanding of the behavior of controlled mechanisms, and the development of behavior-based robots in a few robot research laboratories. The validity of these ideas is supported by some simple simulation experiments in the field of mobile robot navigation and guidance.

  12. Development of a Commercially Viable, Modular Autonomous Robotic Systems for Converting any Vehicle to Autonomous Control

    NASA Technical Reports Server (NTRS)

    Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.

    1994-01-01

    A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, together with a modular payload supporting multiple applications. The MARS design is scalable, reconfigurable, and cost effective owing to modern open-system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
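
    One way to picture the five-level hierarchy above is as a graceful-degradation ladder: the controller falls back to a lower level as capabilities drop out. The mode names follow the abstract; the sensor flags and fallback order are illustrative assumptions only.

```python
from enum import IntEnum

# The five control levels named in the MARS abstract; the dispatch logic
# below is a hypothetical sketch, not the system's actual implementation.
class ControlMode(IntEnum):
    TELEOPERATED = 1
    CONTINUOUS_GUIDEPATH = 2
    PERIODIC_GUIDEPATH = 3
    ABSOLUTE_POSITION_NAV = 4
    RELATIVE_POSITION_NAV = 5

def select_mode(odometry_ok, gps_ok, guidepath_visible, operator_connected):
    """Degrade gracefully through the hierarchy as capabilities drop out."""
    if gps_ok:
        return ControlMode.ABSOLUTE_POSITION_NAV
    if odometry_ok:
        return ControlMode.RELATIVE_POSITION_NAV
    if guidepath_visible:
        return ControlMode.CONTINUOUS_GUIDEPATH
    if operator_connected:
        return ControlMode.TELEOPERATED
    raise RuntimeError("no viable control mode; halting vehicle")

print(select_mode(True, False, True, True))  # ControlMode.RELATIVE_POSITION_NAV
```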

  13. Robotics in biomedical chromatography and electrophoresis.

    PubMed

    Fouda, H G

    1989-08-11

    The ideal laboratory robot can be viewed as "an indefatigable assistant capable of working continuously for 24 h a day with constant efficiency". The development of a system approaching that promise requires considerable skill and time commitment, a thorough understanding of the capabilities and limitations of the robot and its specialized modules, and an intimate knowledge of the functions to be automated. The robot need not emulate every manual step. Effective substitutes for difficult steps must be devised. The future of laboratory robots depends not only on technological advances in other fields, but also on the skill and creativity of chromatographers and other scientists. The robot has been applied to automate numerous biomedical chromatography and electrophoresis methods. The quality of its data can approach, and in some cases exceed, that of manual methods. Maintaining high data quality during continuous operation requires frequent maintenance and validation. Well designed robotic systems can yield a substantial increase in laboratory productivity without a corresponding increase in manpower. They can free skilled personnel from mundane tasks and can enhance the safety of the laboratory environment. The integration of robotics, chromatography systems and laboratory information management systems permits full automation and affords opportunities for unattended method development and for future incorporation of artificial intelligence techniques and the evolution of expert systems. Finally, humanoid attributes aside, robotic utilization in the laboratory should not be an end in itself. The robot is a useful tool that should be utilized only when it is prudent and cost-effective to do so.

  14. A Behavior-Based Strategy for Single and Multi-Robot Autonomous Exploration

    PubMed Central

    Cepeda, Jesus S.; Chaimowicz, Luiz; Soto, Rogelio; Gordillo, José L.; Alanís-Reyes, Edén A.; Carrillo-Arce, Luis C.

    2012-01-01

    In this paper, we consider the problem of autonomous exploration of unknown environments with single and multiple robots. This is a challenging task, with several potential applications. We propose a simple yet effective approach that combines behavior-based navigation with an efficient data structure to store previously visited regions. This allows robots to safely navigate, disperse and efficiently explore the environment. A series of experiments performed using a realistic robotic simulator and a real testbed scenario demonstrate that our technique effectively distributes the robots over the environment and allows them to quickly accomplish their mission in large open spaces, narrow cluttered environments, dead-end corridors, as well as rooms with minimum exits.
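
    The "data structure to store previously visited regions" can be pictured as a grid of visit counts that biases each robot toward the least-visited neighboring cell. The cell size and four-neighbor set here are assumptions for illustration, not the paper's exact structure.

```python
from collections import defaultdict

# Minimal sketch of steering toward the least-visited region using a grid of
# visit counts; cell size and neighbor set are illustrative assumptions.
CELL = 1.0  # meters per grid cell (assumed)
visits = defaultdict(int)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def record_visit(x, y):
    visits[cell_of(x, y)] += 1

def best_direction(x, y):
    """Return the neighbor offset whose cell has the fewest recorded visits."""
    cx, cy = cell_of(x, y)
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return min(neighbors, key=lambda d: visits[(cx + d[0], cy + d[1])])

record_visit(0.5, 0.5)   # current cell
record_visit(1.5, 0.5)   # east neighbor now looks "explored"
dx, dy = best_direction(0.5, 0.5)
```

    With several robots sharing (or independently maintaining) such counts, they naturally disperse, since cells a teammate has swept stop attracting attention.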

  15. Rice-obot 1: An intelligent autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R.; Ciscon, L.; Berberian, D.

    1989-01-01

    The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.

  16. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  17. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  18. Autonomous Evolution of Dynamic Gaits with Two Quadruped Robots

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Takamura, Seichi; Yamamoto, Takashi; Fujita, Masahiro

    2004-01-01

    A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
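
    The onboard scheme above can be sketched as a toy (1+1) evolution strategy over a gait parameter vector. The fitness function here is a stand-in: the real system scored each candidate gait by measuring its speed with the robot's own sensors, with no experimenter in the loop.

```python
import random

# Toy (1+1) evolution strategy in the spirit of onboard gait evolution.
def fitness(params):
    # Hypothetical surrogate fitness: best "gait" at 0.7 in each dimension.
    # The real system measured gait speed with onboard sensors instead.
    return -sum((p - 0.7) ** 2 for p in params)

def evolve(n_params=5, generations=200, sigma=0.05, seed=0):
    """Mutate the current best gait; keep the child whenever it is no worse."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(n_params)]
    best_f = fitness(best)
    for _ in range(generations):
        child = [p + rng.gauss(0.0, sigma) for p in best]
        child_f = fitness(child)
        if child_f >= best_f:
            best, best_f = child, child_f
    return best, best_f

best, best_f = evolve()
```

    Because only improvements (or ties) are kept, gait quality is monotonically non-decreasing, which matters when every evaluation costs real walking time on hardware.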

  19. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens offering varying perspectives on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
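
    The core mapping described above, from a robot joint's configuration to an actuator's vibration frequency, could be as simple as a clamped linear interpolation. The angle and frequency ranges below are assumptions for illustration, not the device's actual calibration.

```python
# Illustrative linear mapping from joint angle to vibrotactile frequency;
# the ranges are assumed values, not the wearable's real calibration.
JOINT_MIN, JOINT_MAX = 0.0, 180.0   # joint angle range, degrees (assumed)
FREQ_MIN, FREQ_MAX = 50.0, 250.0    # actuator frequency band, Hz (assumed)

def angle_to_frequency(angle_deg):
    """Clamp the angle to its range, then interpolate into the frequency band."""
    a = max(JOINT_MIN, min(JOINT_MAX, angle_deg))
    t = (a - JOINT_MIN) / (JOINT_MAX - JOINT_MIN)
    return FREQ_MIN + t * (FREQ_MAX - FREQ_MIN)

print(angle_to_frequency(90.0))  # midpoint of the joint range -> 150.0 Hz
```

    One such channel per joint lets the operator "feel" the arm's pose without watching a screen, which is the sensory-substitution idea the paper exploits.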

  20. Humanoids Designed to do Work

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron

    2001-01-01

    NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.

  1. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The University of Waterloo Robotics Team, from Canada, prepares to place their robot on the start platform during the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  2. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    The University of Waterloo Robotics Team, from Ontario, Canada, prepares their robot for the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team from the University of Waterloo is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  3. Learning tactile skills through curious exploration

    PubMed Central

    Pape, Leo; Oddo, Calogero M.; Controzzi, Marco; Cipriani, Christian; Förster, Alexander; Carrozza, Maria C.; Schmidhuber, Jürgen

    2012-01-01

    We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explicit teacher signal. In this approach, the acquisition of skills is driven by the information content of the sensory input signals relative to a learner that aims at representing sensory inputs using fewer and fewer computational resources. We show that, from initially random exploration of its environment, the robotic system autonomously develops a small set of basic motor skills that lead to different kinds of tactile input. Next, the system learns how to exploit the learned motor skills to solve supervised texture classification tasks. Our approach demonstrates the feasibility of autonomous acquisition of tactile skills on physical robotic platforms through curiosity-driven reinforcement learning, overcomes typical difficulties of engineered solutions for active tactile exploration and underactuated control, and provides a basis for studying developmental learning through intrinsic motivation in robots. PMID:22837748
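
    The intrinsic-reward idea above can be illustrated with a toy learner whose reward is its own learning progress: the drop in prediction error on each new observation. The running-mean predictor here is a deliberate simplification; the paper's approach uses compression progress with a more elaborate learner.

```python
# Toy sketch of intrinsic reward as learning progress; the simple
# running-mean predictor is an illustrative stand-in, not the paper's model.
class CuriousLearner:
    def __init__(self, lr=0.2):
        self.prediction = 0.0
        self.lr = lr

    def intrinsic_reward(self, observation):
        """Reward equals the reduction in prediction error for this observation."""
        err_before = abs(observation - self.prediction)
        self.prediction += self.lr * (observation - self.prediction)
        err_after = abs(observation - self.prediction)
        return err_before - err_after  # positive while the learner improves

learner = CuriousLearner()
rewards = [learner.intrinsic_reward(1.0) for _ in range(5)]
# Rewards shrink as the repeated stimulus becomes predictable, pushing the
# agent to seek new kinds of tactile input instead.
```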

  4. Referral of sensation to an advanced humanoid robotic hand prosthesis.

    PubMed

    Rosén, Birgitta; Ehrsson, H Henrik; Antfolk, Christian; Cipriani, Christian; Sebelius, Fredrik; Lundborg, Göran

    2009-01-01

    Hand prostheses that are currently available on the market are used by amputees to only a limited extent, partly because of lack of sensory feedback from the artificial hand. We report a pilot study that showed how amputees can experience a robot-like advanced hand prosthesis as part of their own body. We induced a perceptual illusion by which touch applied to the stump of the arm was experienced as coming from the artificial hand. This illusion was elicited by applying synchronous tactile stimulation to the hidden amputation stump and the robotic hand prosthesis in full view. In five people who had had upper limb amputations this stimulation caused referral of touch sensation from the stump to the artificial hand, and the prosthesis was experienced more like a real hand. We also showed that this illusion can work when the amputee controls the movements of the artificial hand by recordings of the arm muscle activity with electromyograms. These observations indicate that the previously described "rubber hand illusion" is also valid for an advanced hand prosthesis, even when it has a robotic-like appearance.

  5. Centaur: A Mobile Dexterous Humanoid for Surface Operations

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Ambrose, Robert O.; Goza, S. Michael; Junkin, Lucien; Neuhaus, Peter D.; Pratt, Jerry E.

    2005-01-01

    Future human and robotic planetary expeditions could benefit greatly from expanded Extra-Vehicular Activity (EVA) capabilities supporting a broad range of multiple, concurrent surface operations. Risky, expensive and complex, conventional EVAs are restricted in both duration and scope by consumables and available manpower, creating a resource management problem. A mobile, highly dexterous Extra-Vehicular Robotic (EVR) system called Centaur is proposed to cost-effectively augment human astronauts on surface excursions. The Centaur design combines a highly capable wheeled mobility platform with an anthropomorphic upper body mounted on a three degree-of-freedom waist. Able to use many ordinary handheld tools, the robot could conserve EVA hours by relieving humans of many routine inspection and maintenance chores and assisting them in more complex tasks, such as repairing other robots. As an astronaut surrogate, Centaur could take risks unacceptable to humans, respond more quickly to EVA emergencies and work much longer shifts. Though originally conceived as a system for planetary surface exploration, the Centaur concept could easily be adapted for terrestrial military applications such as de-mining, surveillance and other hazardous duties.

  6. Reading sadness beyond human faces.

    PubMed

    Chammat, Mariam; Foucher, Aurélie; Nadel, Jacqueline; Dubal, Stéphanie

    2010-08-12

    Human faces are the main emotion displayers. Knowing that emotional compared to neutral stimuli elicit enlarged ERP components at the perceptual level, one may wonder whether this has led to an emotional facilitation bias toward human faces. To contribute to this question, we measured the P1 and N170 components of the ERPs elicited by human facial compared to artificial stimuli, namely non-humanoid robots. Fifteen healthy young adults were shown sad and neutral, upright and inverted expressions of human versus robotic displays. An increase in P1 amplitude in response to sad displays compared to neutral ones evidenced an early perceptual amplification for sadness information. P1 and N170 latencies were delayed in response to robotic stimuli compared to human ones, while N170 amplitude was not affected by media. Inverted human stimuli elicited a longer P1 latency and a larger N170 amplitude, while inverted robotic stimuli did not. As a whole, our results show that emotion facilitation is not biased toward human faces but rather extends to non-human displays, thus suggesting our capacity to read emotion beyond faces. Copyright 2010 Elsevier B.V. All rights reserved.

  7. Where neuroscience and dynamic system theory meet autonomous robotics: a contracting basal ganglia model for action selection.

    PubMed

    Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J

    2008-05-01

    Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.
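
    The dithering-avoidance property mentioned above can be illustrated with a winner-take-all selector that gives the ongoing action a persistence bonus, so the agent does not flip between actions of near-equal salience. This is a deliberately simple stand-in; the paper's mechanism is a contracting dynamical model of the cortico-baso-thalamo-cortical loops, not this heuristic.

```python
# Winner-take-all action selection with hysteresis; the persistence bonus is
# an illustrative heuristic, not the paper's basal ganglia dynamics.
def make_selector(persistence_bonus=0.2):
    state = {"current": None}
    def select(salience):
        scores = dict(salience)
        if state["current"] in scores:
            scores[state["current"]] += persistence_bonus  # favor ongoing action
        winner = max(scores, key=scores.get)
        state["current"] = winner
        return winner
    return select

select = make_selector()
print(select({"eat": 0.6, "drink": 0.5}))   # eat
print(select({"eat": 0.45, "drink": 0.5}))  # eat persists: 0.45 + 0.2 > 0.5
print(select({"eat": 0.2, "drink": 0.5}))   # switch to drink
```

    A plain if-then-else rule, by contrast, would switch the moment the saliences cross, wasting energy on half-completed behaviors in a survival task.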

  8. Design of an autonomous exterior security robot

    NASA Technical Reports Server (NTRS)

    Myers, Scott D.

    1994-01-01

    This paper discusses the requirements and preliminary design of a robotic vehicle for performing autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six kilometer by six kilometer zone with autonomous navigation and obstacle avoidance. In addition to detection of crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tag system.

  9. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    A team KuuKulgur robot from Estonia is seen on the practice field during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team KuuKulgur is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  10. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    Sam Ortega, NASA program manager of Centennial Challenges, watches as robots attempt the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  11. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot retrieves a sample during a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  12. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The team AERO robot drives off the starting platform during the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  13. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    Team Cephal's robot is seen on the starting platform during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  14. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Oregon State University Mars Rover Team's robot is seen during level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  15. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-10

    Jerry Waechter of team Middleman from Dunedin, Florida, works on their robot named Ro-Bear during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Tuesday, June 10, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Team Middleman is one of eighteen teams competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  16. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    A robot from the Intrepid Systems team is seen during the rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  17. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    A team KuuKulgur robot is seen as it begins the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  18. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The team Mountaineers robot is seen as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  19. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Members of the Oregon State University Mars Rover Team prepare their robot to attempt the level one competition at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  20. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Stellar Automation Systems team poses for a picture with their robot after attempting the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  1. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  2. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    All four of team KuuKulgur's robots are seen as they attempt the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  3. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    Spectators watch as the team Survey robot conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  4. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    Team Middleman's robot, Ro-Bear, is seen as it starts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  5. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    The team Mountaineers robot is seen after picking up the sample during a rerun of the level one challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  6. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-14

    Two of team KuuKulgur's robots are seen as they attempt a rerun of the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Saturday, June 14, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  7. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    Members of team Survey follow their robot as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  8. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    A team KuuKulgur robot approaches the sample as it attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  9. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot is seen on the starting platform before beginning its attempt at the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  10. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-11

    The Mountaineers team from West Virginia University watches as their robot attempts the level one challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Wednesday, June 11, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  11. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    The team Survey robot is seen as it conducts a demonstration of the level two challenge at the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  12. 2014 NASA Centennial Challenges Sample Return Robot Challenge

    NASA Image and Video Library

    2014-06-12

    Team Survey's robot is seen as it conducts a demonstration of the level two challenge during the 2014 NASA Centennial Challenges Sample Return Robot Challenge, Thursday, June 12, 2014, at the Worcester Polytechnic Institute (WPI) in Worcester, Mass. Eighteen teams are competing for a $1.5 million NASA prize purse. Teams will be required to demonstrate autonomous robots that can locate and collect samples from a wide and varied terrain, operating without human control. The objective of this NASA-WPI Centennial Challenge is to encourage innovations in autonomous navigation and robotics technologies. Innovations stemming from the challenge may improve NASA's capability to explore a variety of destinations in space, as well as enhance the nation's robotic technology for use in industries and applications on Earth. Photo Credit: (NASA/Joel Kowsky)

  13. Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.

  14. Autonomous Navigation, Dynamic Path and Work Flow Planning in Multi-Agent Robotic Swarms Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Leucht, Kurt; Stolleis, Karl

    2015-01-01

    Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots, called Swarmies, to be used as a ground-based research platform for in-situ resource utilization missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in an unknown environment and return those resources to a central site.

  15. Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.

    PubMed

    Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam

    2016-01-01

    An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of Saturation and Value components of HSV color space was incorporated that was enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noises. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
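    The segmentation idea in the abstract, combining the Saturation and Value channels of HSV and checking that a candidate segment crosses the image boundary, can be sketched as below. Thresholds, function names, and the combination rule are assumptions for illustration, not the published RoboLens algorithm.

    ```python
    # Illustrative marker-free segmentation cue: metallic instrument shafts
    # tend to be grey (low saturation) and reflective (high value), and a
    # genuine instrument segment enters the image through a boundary port.
    def instrument_mask(hsv_pixels, s_max=0.25, v_min=0.4):
        """Boolean mask over an image given as rows of (h, s, v) tuples
        in [0, 1]; flags low-saturation, bright pixels."""
        return [[(s <= s_max and v >= v_min) for (h, s, v) in row]
                for row in hsv_pixels]

    def touches_boundary(mask):
        """True if the candidate segment crosses the image boundary,
        one of the instrument characteristics mentioned in the abstract."""
        return (any(mask[0]) or any(mask[-1])
                or any(row[0] or row[-1] for row in mask))
    ```

    A full pipeline would additionally refine the mask with the Hue channel and locate the instrument tip as the mask extremum farthest from the boundary crossing.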

  16. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
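    The viscous friction-like alignment term the abstract highlights can be sketched as an acceleration that damps velocity differences between neighbouring agents. The gain and neighbourhood handling below are assumptions for illustration, not the authors' exact control law.

    ```python
    # Illustrative velocity-alignment ("viscous friction") term: each agent
    # accelerates toward the mean velocity of its neighbours, which damps
    # relative motion and helps stabilise a noisy, delayed swarm.
    def alignment_accel(v_self, neighbour_vels, gain=0.5):
        """2D acceleration pulling v_self toward the mean neighbour velocity."""
        if not neighbour_vels:
            return (0.0, 0.0)
        mx = sum(v[0] for v in neighbour_vels) / len(neighbour_vels)
        my = sum(v[1] for v in neighbour_vels) / len(neighbour_vels)
        return (gain * (mx - v_self[0]), gain * (my - v_self[1]))
    ```

    In a full controller this term would be summed with repulsion and, for the target-tracking variant, a goal-attraction component before being sent to the quadcopter's velocity loop.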

  17. Open Issues in Evolutionary Robotics.

    PubMed

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.

  18. Autonomous Realtime Threat-Hunting Robot (ARTHR)

    ScienceCinema

    Idaho National Laboratory - David Bruemmer, Curtis Nielsen

    2017-12-09

    Idaho National Laboratory researchers developed an intelligent plug-and-play robot payload that transforms commercial robots into effective first responders for deadly chemical, radiological and explosive threats. To learn more, visit

  19. KSC-2012-4345

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – During a free-flight test of the Project Morpheus vehicle at the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the vehicle lifted off the ground and then experienced a hardware component failure, which prevented it from maintaining stable flight. No one was injured and the resulting fire was extinguished by Kennedy fire personnel. Engineers are looking into the test data and the agency will release information as it becomes available. Failures such as these were anticipated prior to the test, and are part of the development process for any complex spaceflight hardware. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

  20. KSC-2012-4346

    NASA Image and Video Library

    2012-08-09

    CAPE CANAVERAL, Fla. – During a free-flight test of the Project Morpheus vehicle at the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida, the vehicle lifted off the ground and then experienced a hardware component failure, which prevented it from maintaining stable flight. No one was injured and the resulting fire was extinguished by Kennedy fire personnel. Engineers are looking into the test data and the agency will release information as it becomes available. Failures such as these were anticipated prior to the test, and are part of the development process for any complex spaceflight hardware. Testing of the prototype lander had been ongoing at NASA’s Johnson Space Center in Houston in preparation for its first free-flight test at Kennedy Space Center. Morpheus was manufactured and assembled at JSC and Armadillo Aerospace. Morpheus is large enough to carry 1,100 pounds of cargo to the moon – for example, a humanoid robot, a small rover, or a small laboratory to convert moon dust into oxygen. The primary focus of the test is to demonstrate an integrated propulsion and guidance, navigation and control system that can fly a lunar descent profile to exercise the Autonomous Landing and Hazard Avoidance Technology, or ALHAT, safe landing sensors and closed-loop flight control. For more information on Project Morpheus, visit http://morpheuslander.jsc.nasa.gov/. Photo credit: NASA

Top