Human-Robot Teams for Unknown and Uncertain Environments
NASA Technical Reports Server (NTRS)
Fong, Terry
2015-01-01
Human-robot interaction is the study of interactions between humans and robots; researchers often refer to it as HRI. It is a multidisciplinary field with contributions from human-computer interaction, artificial intelligence, and robotics.
Socially intelligent robots: dimensions of human-robot interaction.
Dautenhahn, Kerstin
2007-04-29
Social intelligence in robots has a relatively recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robots and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.
Abubshait, Abdulaziz; Wiese, Eva
2017-01-01
Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.
See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.
Xu, Tian Linger; Zhang, Hui; Yu, Chen
2016-05-01
We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
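A rough Python sketch of such a gaze-contingent loop is given below. All interfaces (eye_tracker, robot), the update rate, and the face-look probability are hypothetical stand-ins, not the authors' implementation.

```python
import random
import time

FACE_LOOK_PROBABILITY = 0.8  # hypothetical manipulation: how often the robot returns a face look

def gaze_contingent_loop(eye_tracker, robot, duration_s=300.0):
    """Drive the robot's gaze from the human's momentary gaze behavior.

    eye_tracker.current_target() -> 'robot_face' | 'object' | 'elsewhere'
    robot.look_at(target)        -> commands the robot's head pose
    Both interfaces are assumed for illustration.
    """
    start = time.time()
    while time.time() - start < duration_s:
        human_target = eye_tracker.current_target()
        if human_target == 'robot_face':
            # Contingent response: return the face look with a manipulated probability,
            # creating mutual gaze / eye contact on some proportion of looks.
            if random.random() < FACE_LOOK_PROBABILITY:
                robot.look_at('human_face')
        elif human_target == 'object':
            robot.look_at('object')  # maintain joint attention on the shared object
        time.sleep(0.05)             # ~20 Hz, fast enough for momentary gaze events
```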
Analysis of human emotion in human-robot interaction
NASA Astrophysics Data System (ADS)
Blar, Noraidah; Jafar, Fairul Azni; Abdullah, Nurhidayu; Muhammad, Mohd Nazrin; Kassim, Anuar Muhamed
2015-05-01
Robots are widely applied in human work settings such as industry and hospitals, and it is believed that humans and robots can collaborate well to achieve optimal work outcomes. The objectives of this project are to analyze human-robot collaboration and to understand human feelings (kansei factors) when dealing with a robot, to which the robot should adapt. Researchers are currently exploring human-robot interaction with the intention of reducing the problems that exist in today's society. Studies have found that good interaction between human and robot first requires understanding the abilities of each. Kansei Engineering was used to carry out the project. The experiments were conducted by distributing a questionnaire to students and technicians, and the responses were then analyzed using SPSS. The analysis showed five feelings that are significant to humans in human-robot interaction: anxious, fatigued, relaxed, peaceful, and impressed.
The Human-Robot Interaction Operating System
NASA Technical Reports Server (NTRS)
Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda
2006-01-01
In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
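As a purely illustrative sketch of what an extensible, ability-based framework can look like (hypothetical names throughout; this is not the actual HRI/OS API):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A human or robot team member, described by the abilities it advertises."""
    name: str
    abilities: set = field(default_factory=set)

class TeamRegistry:
    """Minimal registry sketch: agents register abilities and tasks are
    routed to whichever team member can perform them."""
    def __init__(self):
        self.agents = []

    def register(self, agent: Agent):
        self.agents.append(agent)

    def request(self, ability: str):
        # In a real framework, task-oriented dialogue would happen here;
        # this sketch only routes the request.
        for agent in self.agents:
            if ability in agent.abilities:
                return agent
        return None  # nobody can do it: escalate to a human

registry = TeamRegistry()
registry.register(Agent('rover', {'drive', 'inspect_seam'}))
registry.register(Agent('astronaut', {'inspect_seam', 'repair'}))
print(registry.request('repair').name)  # -> astronaut
```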
Alac, Morana; Movellan, Javier; Tanaka, Fumihide
2011-12-01
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.
Sensing sociality in dogs: what may make an interactive robot social?
Lakatos, Gabriella; Janiak, Mariusz; Malek, Lukasz; Muszynski, Robert; Konok, Veronika; Tchon, Krzysztof; Miklósi, A
2014-03-01
This study investigated whether dogs would engage in social interactions with an unfamiliar robot, utilize the communicative signals it provides and to examine whether the level of sociality shown by the robot affects the dogs' performance. We hypothesized that dogs would react to the communicative signals of a robot more successfully if the robot showed interactive social behaviour in general (towards both humans and dogs) than if it behaved in a machinelike, asocial way. The experiment consisted of an interactive phase followed by a pointing session, both with a human and a robotic experimenter. In the interaction phase, dogs witnessed a 6-min interaction episode between the owner and a human experimenter and another 6-min interaction episode between the owner and the robot. Each interaction episode was followed by the pointing phase in which the human/robot experimenter indicated the location of hidden food by using pointing gestures (two-way choice test). The results showed that in the interaction phase, the dogs' behaviour towards the robot was affected by the differential exposure. Dogs spent more time staying near the robot experimenter as compared to the human experimenter, with this difference being even more pronounced when the robot behaved socially. Similarly, dogs spent more time gazing at the head of the robot experimenter when the situation was social. Dogs achieved a significantly lower level of performance (finding the hidden food) with the pointing robot than with the pointing human; however, separate analysis of the robot sessions suggested that gestures of the socially behaving robot were easier for the dogs to comprehend than gestures of the asocially behaving robot. Thus, the level of sociality shown by the robot was not enough to elicit the same set of social behaviours from the dogs as was possible with humans, although sociality had a positive effect on dog-robot interactions.
Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.
Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J
2017-07-01
Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. In this paper, we present metrics that can be used to identify, from kinematics, whether a physical interaction occurred between two people. We present a simple Activity of Daily Living (ADL) task that involves such an interaction, and we show that these metrics successfully identify interactions.
Modeling Leadership Styles in Human-Robot Team Dynamics
NASA Technical Reports Server (NTRS)
Cruz, Gerardo E.
2005-01-01
The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.
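The abstract leaves the team efficiency metrics unspecified; the sketch below is one plausible form, offered only as an illustration (not the author's metric), in which each human intervention is charged a fixed time penalty:

```python
def team_efficiency(progress, elapsed_s, interventions, intervention_cost_s=30.0):
    """Illustrative metric: task progress per effective second, where every
    operator intervention adds a hypothetical fixed time cost."""
    effective_time_s = elapsed_s + interventions * intervention_cost_s
    return progress / effective_time_s

# Comparing two leadership styles on the same teleoperation task:
print(team_efficiency(1.0, 420.0, interventions=6))  # directive: frequent operator input
print(team_efficiency(1.0, 480.0, interventions=2))  # transactional: more robot latitude
```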
Can Robots and Humans Get Along?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2007-06-01
Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. Builders should consider adding modules for the recognition and classification of head movements to the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
Smooth leader or sharp follower? Playing the mirror game with a robot.
Kashi, Shir; Levy-Tzedek, Shelly
2018-01-01
The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.
Trust and Trustworthiness in Human-Robot Interaction: A Formal Conceptualization
2016-05-11
AFRL-AFOSR-VA-TR-2016-0198. Trust and Trustworthiness in Human-Robot Interaction: A Formal Conceptualization. Alan Wagner, Georgia Tech Applied Research; reporting period …/27/2013-03/31/2016. The effort evaluated algorithms for characterizing trust during interactions between a robot and a human and employed strategies for repairing trust during emergency…
Emotion attribution to a non-humanoid robot in different social situations.
Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám
2014-01-01
In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and one secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.
Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed
2014-01-01
We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. In a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.
Złotowski, Jakub A.; Sumioka, Hidenobu; Nishio, Shuichi; Glas, Dylan F.; Bartneck, Christoph; Ishiguro, Hiroshi
2015-01-01
The uncanny valley theory proposed by Mori has been heavily investigated in the recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear what, if any, impact an uncanny-looking robot will have in the context of an interaction. In this paper we describe an exploratory empirical study using a live interaction paradigm that involved repeated interactions with robots that differed in embodiment and their attitude toward a human. We found that both investigated components of the uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude and this effect was especially prominent for a machine-like robot. On the other hand, merely repeating interactions was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon. PMID:26175702
Eyeblink Synchrony in Multimodal Human-Android Interaction.
Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro
2016-12-23
As a result of recent progress in communication robot technology, robots are becoming important social partners for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.
A Preliminary Study of Peer-to-Peer Human-Robot Interaction
NASA Technical Reports Server (NTRS)
Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert
2006-01-01
The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.
On the Utilization of Social Animals as a Model for Social Robotics
Miklósi, Ádám; Gácsi, Márta
2012-01-01
Social robotics is a thriving field in building artificial agents. The possibility to construct agents that can engage in meaningful social interaction with humans presents new challenges for engineers. In general, social robotics has been inspired primarily by psychologists with the aim of building human-like robots. Only a small subcategory of “companion robots” (also referred to as robotic pets) was built to mimic animals. In this opinion essay we argue that all social robots should be seen as companions and more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. This view is underlined by the means of an ethological analysis and critical evaluation of present day companion robots. We suggest that human–animal interaction provides a rich source of knowledge for designing social robots that are able to interact with humans under a wide range of conditions. PMID:22457658
Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.
Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi
2018-03-26
For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instruction from people. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent obtains intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement.
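One common way to realize intrinsic motivation through an action-conditional predictive model, sketched here under our own assumptions rather than as the authors' exact formulation, is to reward the agent in proportion to the forward model's prediction error:

```python
import numpy as np

class ForwardModel:
    """Linear action-conditional predictor: next_state ~= W @ [state, onehot(action)]."""
    def __init__(self, state_dim, n_actions, lr=0.01):
        self.W = np.zeros((state_dim, state_dim + n_actions))
        self.n_actions = n_actions
        self.lr = lr

    def _features(self, state, action):
        onehot = np.zeros(self.n_actions)
        onehot[action] = 1.0
        return np.concatenate([state, onehot])

    def intrinsic_reward(self, state, action, next_state):
        """Return the prediction error as a curiosity-style reward, then update."""
        x = self._features(state, action)
        error = next_state - self.W @ x
        self.W += self.lr * np.outer(error, x)  # online least-squares step
        return float(np.linalg.norm(error))     # poorly predicted transitions pay more
```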
Human-Robot Interaction: Status and Challenges.
Sheridan, Thomas B
2016-06-01
The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations.
Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei
2016-01-01
Robots play increasingly important roles in daily life and bring us much convenience, but when people work with robots, significant differences remain between human-human and human-robot interaction. Our goal is to make robots behave more human-like. We designed a controller that can sense the force acting on any point of a robot and ensure that the robot moves in accordance with that force. First, a spring-mass-dashpot system was used to describe the physical model; this second-order system is the kernel of the controller. From it, we establish the state-space equations of the system. A particle swarm optimization algorithm was then used to obtain the system parameters. To test the stability of the system, the root-locus diagram is shown in the paper. Finally, experiments were carried out on the robotic spinal surgery system developed by our team, and the results show that the new controller performs better during human-robot interaction.
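The controller described is, in effect, an admittance law: a virtual spring-mass-dashpot maps the sensed contact force to compliant motion. Below is a minimal discrete-time sketch; the gains are illustrative stand-ins for the parameters the paper tunes with particle swarm optimization.

```python
import numpy as np

class AdmittanceController:
    """Virtual spring-mass-dashpot: M*x'' + B*x' + K*x = F_ext."""
    def __init__(self, mass=2.0, damping=15.0, stiffness=0.0, dt=0.001):
        self.M, self.B, self.K, self.dt = mass, damping, stiffness, dt
        self.x = np.zeros(3)  # displacement from the nominal pose
        self.v = np.zeros(3)  # velocity

    def step(self, f_ext):
        """One control cycle (semi-implicit Euler); returns the compliant offset."""
        a = (np.asarray(f_ext, dtype=float) - self.B * self.v - self.K * self.x) / self.M
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

ctrl = AdmittanceController()
for _ in range(1000):                  # 1 s of a constant 5 N push along x
    offset = ctrl.step([5.0, 0.0, 0.0])
print(offset)                          # the robot yields smoothly along the push
```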
Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin
2013-01-01
As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.
Velocity-curvature patterns limit human-robot physical interaction
Maurice, Pauline; Huber, Meghan E.; Hogan, Neville; Sternad, Dagmar
2018-01-01
Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration. PMID:29744380
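The biological profile referred to here is the two-thirds power law, under which tangential speed scales with path curvature as v = gamma * kappa**(-1/3). The sketch below generates such a speed profile along an elliptic path; the axis lengths and gain are illustrative, not the study's values.

```python
import numpy as np

def two_thirds_power_law_speeds(a=0.30, b=0.15, gamma=0.05, n=400):
    """Tangential speed along an ellipse (semi-axes a, b in meters) under the
    two-thirds power law; gamma sets the overall pace."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Curvature of an ellipse parameterized by the angle theta
    kappa = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5
    v = gamma * kappa ** (-1.0 / 3.0)  # slower where the path bends more sharply
    return theta, v

theta, v = two_thirds_power_law_speeds()
print(f"speed range: {v.min():.3f} to {v.max():.3f} m/s")
```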
Loving Machines: Theorizing Human and Sociable-Technology Interaction
NASA Astrophysics Data System (ADS)
Shaw-Garlock, Glenda
Today, human and sociable-technology interaction is a contested site of inquiry. Some regard social robots as an innovative medium of communication that offers new avenues for expression, communication, and interaction. Others question the moral veracity of human-robot relationships, suggesting that such associations risk psychological impoverishment. What seems clear is that the emergence of social robots in everyday life will alter the nature of social interaction, bringing with it a need for new theories to understand the shifting terrain between humans and machines. This work provides a historical context for human and sociable robot interaction. Current research related to human-sociable-technology interaction is considered in relation to arguments that confront a humanist view that confines 'technological things' to the nonhuman side of the human/nonhuman binary relation. Finally, it recommends a theoretical approach for the study of human and sociable-technology interaction that accommodates increasingly personal relations between human and nonhuman technologies.
2016-05-01
ARL-TR-7683, May 2016, US Army Research Laboratory: A Guide for Developing Human-Robot Interaction Experiments in the Robotic… (report fragments cite Kunkler (2006) on similarities between computer simulation tools and robotic surgery systems, e.g., mechanized feedback…, and Davies, 'A review of robotics in surgery,' Proceedings of the Institution of Mechanical Engineers, Part H).
Toward a framework for levels of robot autonomy in human-robot interaction.
Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A
2014-07-01
A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence - and are influenced by - robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.
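As a rough illustration, a 10-point span from teleoperation to full autonomy can be encoded as an ordered enumeration; the level names below are paraphrased for illustration, and readers should consult the paper for the exact taxonomy.

```python
from enum import IntEnum

class LORA(IntEnum):
    """Illustrative 10-point ordering from full human control to full autonomy.
    Names are paraphrases, not necessarily the paper's exact labels."""
    MANUAL = 1
    TELEOPERATION = 2
    ASSISTED_TELEOPERATION = 3
    BATCH_PROCESSING = 4
    DECISION_SUPPORT = 5
    SHARED_CONTROL_HUMAN_INITIATIVE = 6
    SHARED_CONTROL_ROBOT_INITIATIVE = 7
    EXECUTIVE_CONTROL = 8
    SUPERVISORY_CONTROL = 9
    FULL_AUTONOMY = 10

print(LORA.SUPERVISORY_CONTROL > LORA.TELEOPERATION)  # True: levels are ordered
```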
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction. Thesis by Eli R. Hooten, submitted to the Faculty of the Graduate School of… (contents include 'II.1 Interactive Modalities and Multi-Touch').
Social robots as embedded reinforcers of social behavior in children with autism.
Kim, Elizabeth S; Berkovits, Lauren D; Bernier, Emily P; Leyzberg, Dan; Shic, Frederick; Paul, Rhea; Scassellati, Brian
2013-05-01
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.
Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien
2018-02-01
Previous studies showed the existence of implicit interaction rules shared by human walkers when crossing each other. In particular, each walker contributes to the collision avoidance task, and the crossing order, as set at the beginning, is preserved throughout the interaction. This order determines the adaptation strategy: the walker who arrives first increases his/her advance by slightly accelerating and changing his/her heading, whereas the second one slows down and moves in the opposite direction. In this study, we analyzed the behavior of human walkers crossing the trajectory of a mobile robot that was programmed to reproduce this human avoidance strategy. In contrast with a previous study, which showed that humans mostly prefer to give way to a non-reactive robot, we observed similar behaviors between human-human avoidance and human-robot avoidance when the robot replicates the human interaction rules. We discuss this result in relation to the importance of controlling robots in a human-like way in order to ease their cohabitation with humans.
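The role-dependent rule described above, where the first-arriving agent increases its advance and the second gives way, can be sketched as follows; the gains are invented for illustration and are not the values identified in the study.

```python
def crossing_adaptation(role, v_nominal, heading_rad,
                        speed_gain=0.1, heading_shift_rad=0.1):
    """Role-dependent crossing adjustment mirroring the strategy described:
    'first' accelerates and shifts heading to increase its advance,
    'second' decelerates and shifts heading the opposite way to give way."""
    if role == 'first':
        return v_nominal * (1.0 + speed_gain), heading_rad + heading_shift_rad
    if role == 'second':
        return v_nominal * (1.0 - speed_gain), heading_rad - heading_shift_rad
    return v_nominal, heading_rad  # no predicted collision: keep course

print(crossing_adaptation('second', v_nominal=1.3, heading_rad=0.0))  # slows, turns away
```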
Peer-to-Peer Human-Robot Interaction for Space Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Nourbakhsh, Illah
2004-01-01
NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.
Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.
Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka
2017-01-01
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Interactions With Robots: The Truths We Reveal About Ourselves.
Broadbent, Elizabeth
2017-01-03
In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in healthcare, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.
User Localization During Human-Robot Interaction
Alonso-Martín, F.; Gorostiza, Javi F.; Malfaz, María; Salichs, Miguel A.
2012-01-01
This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites to obtain a natural interaction between human-human and human-robot is an adequate spatial situation between the interlocutors, that is, to be orientated and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable allocation of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study has been made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented. PMID:23012577
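One generic way to fuse a visual bearing estimate with a sound-source bearing, shown purely as a sketch and not as Maggie's actual pipeline, is a confidence-weighted average on the unit circle:

```python
import numpy as np

def fuse_user_bearing(visual_deg, visual_conf, audio_deg, audio_conf):
    """Confidence-weighted circular mean of two bearing estimates of the user:
    one from vision (e.g., face detection), one from sound source localization."""
    angles = np.radians([visual_deg, audio_deg])
    weights = np.array([visual_conf, audio_conf], dtype=float)
    weights /= weights.sum()
    # Averaging on the unit circle handles wrap-around at +/-180 degrees
    x = np.sum(weights * np.cos(angles))
    y = np.sum(weights * np.sin(angles))
    return float(np.degrees(np.arctan2(y, x)))

print(fuse_user_bearing(20.0, 0.9, 35.0, 0.4))  # vision dominates: ~25 degrees
```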
Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank
2017-12-14
Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (rewards) for RL. Initially we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
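Reduced to a sketch under our own assumptions (a tabular, bandit-style learner rather than the paper's full EEG pipeline), the core idea of mapping single-trial ErrP detections to implicit rewards looks like this:

```python
import numpy as np

class GestureMapper:
    """Learn a gesture -> robot-action mapping from implicit EEG feedback:
    a detected error-related potential signals that the chosen action looked wrong."""
    def __init__(self, n_gestures, n_actions, lr=0.3):
        self.q = np.zeros((n_gestures, n_actions))  # value of each candidate mapping
        self.lr = lr

    def act(self, gesture, epsilon=0.1):
        if np.random.random() < epsilon:            # explore candidate mappings
            return int(np.random.randint(self.q.shape[1]))
        return int(np.argmax(self.q[gesture]))      # exploit the best mapping so far

    def update(self, gesture, action, errp_detected):
        reward = -1.0 if errp_detected else +1.0    # implicit reward from the user's EEG
        self.q[gesture, action] += self.lr * (reward - self.q[gesture, action])
```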
Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed.
Reuten, Anne; van Dam, Maureen; Naber, Marnix
2018-01-01
Physiological responses during human-robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and of comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded pupil size of 40 participants who viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
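Pupil responses of this kind are typically quantified per trial as dilation relative to a pre-stimulus baseline; the snippet below shows that basic computation on a synthetic trace (the sampling rate, window lengths, and simulated signal are assumptions, not the study's parameters).

    import numpy as np

    def peak_dilation(trace, fs, baseline_s=0.5):
        """Baseline-corrected peak pupil dilation for one trial: subtract
        the mean pupil size in the pre-stimulus window from the rest of
        the trace and return the maximum of the result."""
        n_base = int(baseline_s * fs)
        return float((trace[n_base:] - trace[:n_base].mean()).max())

    fs = 60  # Hz, a typical eye-tracker rate
    rng = np.random.default_rng(0)
    trial = np.concatenate([
        rng.normal(3.0, 0.02, 30),                               # baseline, mm
        3.0 + 0.3 * np.hanning(120) + rng.normal(0, 0.02, 120),  # response
    ])
    print(round(peak_dilation(trial, fs), 2))  # roughly 0.3 mm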
Liang, Yuhua Jake; Lee, Seungcheol Austin
2016-09-01
Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, the trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people received before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by affording the persuasive influences from user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.
Toward a framework for levels of robot autonomy in human-robot interaction
Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.
2017-01-01
A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence – and are influenced by – robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot’s autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA. PMID:29082107
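A 10-point autonomy taxonomy of this kind lends itself naturally to a small data structure; the sketch below encodes such a scale and a toy assignment rule. The level labels are paraphrased from the general levels-of-automation literature, and the mapping function is an invented illustration, not the paper's qualitative process.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AutonomyLevel:
        rank: int   # position on the 10-point scale (1 = lowest)
        name: str

    # Illustrative 10-point scale from teleoperation to full autonomy.
    LORA = [AutonomyLevel(r, n) for r, n in enumerate([
        "manual", "teleoperation", "assisted teleoperation",
        "batch processing", "decision support",
        "shared control (human initiative)",
        "shared control (robot initiative)",
        "executive control", "supervisory control", "full autonomy",
    ], start=1)]

    def lora_for(human_share: float) -> AutonomyLevel:
        """Toy rule: map the fraction of the task allocated to the human
        (1.0 = human does everything) onto the nearest rank."""
        rank = max(1, min(10, round(1 + (1.0 - human_share) * 9)))
        return LORA[rank - 1]

    print(lora_for(0.5).name)  # a mid-scale shared-control level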
New Paradigms for Human-Robotic Collaboration During Human Planetary Exploration
NASA Astrophysics Data System (ADS)
Parrish, J. C.; Beaty, D. W.; Bleacher, J. E.
2017-02-01
Human exploration missions to other planetary bodies offer new paradigms for collaboration (control, interaction) between humans and robots beyond the methods currently used to control robots from Earth and robots in Earth orbit.
When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions
ERIC Educational Resources Information Center
Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman
2012-01-01
In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…
Anthropomorphic Robot Design and User Interaction Associated with Motion
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2016-01-01
Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction. Their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limbs and body positions, 2) improve the user's ability to detect anomalous robot behavior which could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
Embodied cognition for autonomous interactive robots.
Hoffman, Guy
2012-10-01
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.
Do infants perceive the social robot Keepon as a communicative partner?
Peca, Andreea; Simut, Ramona; Cao, Hoang-Long; Vanderborght, Bram
2016-02-01
This study investigates whether infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. 23 infants, aged 9-17 months, were exposed, in a first phase, either to a contingent interaction between the active robot and an active human adult, or to an interaction between an active human adult and the non-active robot, followed by a second phase, in which infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were: (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention to the agent whose turn follows in the conversation. The results indicate a significantly higher level of initiations in the interactive robot condition compared to the non-active robot condition, while the difference between the frequencies of anticipations of turn-taking behaviors was not significant. Copyright © 2015 Elsevier Inc. All rights reserved.
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains
2010-12-01
Interactive Exploration Robots: Human-Robotic Collaboration and Interactions
NASA Technical Reports Server (NTRS)
Fong, Terry
2017-01-01
For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
2018-01-01
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888
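A minimal sketch of the proactive and incremental aspects described above (a statistically-driven toy, not the authors' PIL implementation): the robot keeps running counts of which action followed each gesture, acts proactively once the empirical probability clears a confidence threshold, and otherwise waits for an instruction. Gesture and action names are invented.

    from collections import defaultdict

    class ProactiveIncrementalLearner:
        def __init__(self, threshold=0.8):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.threshold = threshold  # confidence needed to act proactively

        def predict(self, gesture):
            """Return an action if the learner is confident, else None
            (meaning: wait for an explicit instruction)."""
            acts = self.counts[gesture]
            total = sum(acts.values())
            if total == 0:
                return None
            action, n = max(acts.items(), key=lambda kv: kv[1])
            return action if n / total >= self.threshold else None

        def update(self, gesture, action):
            self.counts[gesture][action] += 1  # learn on the fly

    pil = ProactiveIncrementalLearner()
    for _ in range(5):
        pil.update("point_at_part", "hand_over_part")
    print(pil.predict("point_at_part"))  # proactive: 'hand_over_part'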
I want what you've got: Cross-platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy that combines with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable-autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems and for how humans and robots interact in true interactive teams.
Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.
Hongbo Wang; Kosuge, K
2012-01-01
Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted-pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained. The use of the latter controller is validated by the experimental results.
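The admittance-with-virtual-force idea can be sketched in a few lines: the follower's commanded velocity obeys a virtual mass-damper driven by the measured interaction force plus a virtual force term. The parameter values and the planar simplification below are illustrative assumptions, not the paper's controller.

    import numpy as np

    def admittance_step(v, f_meas, f_virtual, M=20.0, D=60.0, dt=0.01):
        """One Euler step of M*dv/dt + D*v = f_meas + f_virtual: the robot
        follower yields to the leader's measured force plus a virtual
        force (e.g., to keep the dance step progressing)."""
        dv = (f_meas + f_virtual - D * v) / M
        return v + dv * dt

    v = np.zeros(2)                  # planar follower velocity (m/s)
    f_leader = np.array([5.0, 0.0])  # leader pushes forward (N)
    for _ in range(100):             # one second of interaction
        v = admittance_step(v, f_leader, f_virtual=np.zeros(2))
    print(np.round(v, 3))            # approaches f/D = [0.083, 0]

A pure admittance law of this kind reacts only to force already applied by the leader, which hints at why the paper's predictive inverted-pendulum controller fared better.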
How do walkers avoid a mobile robot crossing their way?
Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien
2017-01-01
Robots and humans have to share the same environment more and more often. To steer robots among humans in a safe and convenient manner, it is necessary to understand how humans interact with them. This work focuses on collision avoidance between a human and a robot during locomotion. Building on previous results on human obstacle avoidance, as well as on the description of the main principles which guide collision avoidance strategies, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a mobile robot. Our results show differences in the strategy set by humans to avoid a robot in comparison with avoiding another human. Humans prefer to give way to the robot even when they are likely to pass first at the beginning of the interaction. Copyright © 2016 Elsevier B.V. All rights reserved.
Intelligence for Human-Assistant Planetary Surface Robots
NASA Technical Reports Server (NTRS)
Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.
2006-01-01
The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.
Simut, Ramona E; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram
2016-01-01
Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated-measures design. It was investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task. It was also examined whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. Interaction of the children with the two partners did not differ apart from eye contact: participants made more eye contact with the social robot than with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child.
Metaphors to Drive By: Exploring New Ways to Guide Human-Robot Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
David J. Bruemmer; David I. Gertman; Curtis W. Nielsen
2007-08-01
Autonomous behaviors created by the research and development community are not being extensively utilized within energy, defense, security, or industrial contexts. This paper provides evidence that the interaction methods used alongside these behaviors may not provide a mental model that can be easily adopted or used by operators. Although autonomy has the potential to reduce overall workload, the use of robot behaviors often increased the complexity of the underlying interaction metaphor. This paper reports our development of new metaphors that support increased robot complexity without passing the complexity of the interaction onto the operator. Furthermore, we illustrate how recognition of problems in human-robot interactions can drive the creation of new metaphors for design and how human factors lessons in usability, human performance, and our social contract with technology have the potential for enormous payoff in terms of establishing effective, user-friendly robot systems when appropriate metaphors are used.
A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.
Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash
2017-07-01
This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve an enhanced physical human-robot interaction performance and deliver appropriate level of assistance to the human operator.
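One simple way to picture such complementary behavior (a toy rule under stated assumptions, not the authors' hybrid controller) is to soften the robot when the human stiffens, for example when an EMG-based estimate of human arm stiffness rises:

    def robot_stiffness(human_stiffness, k_min=100.0, k_max=800.0):
        """Complementary assignment: robot stiffness (N/m) decreases
        linearly as the estimated human stiffness increases; the bounds
        and the linear form are invented for illustration."""
        h = min(max(human_stiffness, k_min), k_max)
        return k_max + k_min - h

    print(robot_stiffness(200.0))  # compliant human -> stiff robot (700.0)
    print(robot_stiffness(750.0))  # stiff human -> compliant robot (150.0)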
Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI
Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo
2008-01-01
Background: When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines, it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience, the ability to attribute intentions and desires to others is referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase in the human-likeness of interaction partners modulates the participants' ToM-associated cortical activity. Methodology/Principal Findings: By means of functional magnetic resonance imaging (n = 20 subjects) we investigated cortical activity modulation during a highly interactive human-robot game. Increasing degrees of human-likeness of the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot, and a human partner. The classical iterated prisoner's dilemma game was applied as the experimental task, which allowed for an implicit detection of ToM-associated cortical activity. During the experiment, participants always played against a random sequence, unknown to them. Irrespective of the surmised interaction partners' responses, participants indicated having experienced more fun and competition in the interaction with increasingly human-like partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer < functional robot < anthropomorphic robot < human).
A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.
Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo
2017-01-01
Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.
Honig, Shanee; Oron-Gilad, Tal
2018-01-01
While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP), that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for HRIs.
Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro
2018-05-30
Although robotics technology has developed immensely, people's uncertainty about fully engaging in human-robot interaction is still growing. Many recent studies have therefore examined human factors that might influence likability, such as personality, and found that compatibility between the human's and the robot's personalities (expressions of personality characteristics) can enhance likability. However, it is still unclear whether specific means and strategies of robot nonverbal behaviour enhance likability in humans with different personality traits, and whether there is a relationship between a robot's nonverbal behaviours and human likability based on personality. In this study, we investigated the interaction via gaze and head-nodding behaviours (mutual gaze convergence and head-nodding synchrony) between introvert/extravert participants and a robot under two communication strategies (backchanneling and turn-taking). Our findings reveal that introvert participants are positively affected by backchanneling in the robot's head-nodding behaviour, which results in substantial head-nodding synchrony, whereas extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between a robot's nonverbal behaviour and human likability based on personality.
Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social
Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka
2017-01-01
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651
Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior.
Ficocelli, Maurizio; Terao, Junichi; Nejat, Goldie
2016-12-01
The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
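The online-updating idea can be caricatured in a few lines: keep a per-user running estimate of how well each displayed emotion elicits the desired assistive response, and update it after every interaction. The emotion labels, learning rate, and update rule below are assumptions for illustration, not the paper's module.

    class EmotionSelector:
        def __init__(self, emotions=("happy", "concerned", "neutral"), lr=0.3):
            self.value = {e: 0.5 for e in emotions}  # prior effectiveness
            self.lr = lr

        def choose(self):
            """Display the emotion currently estimated as most effective."""
            return max(self.value, key=self.value.get)

        def feedback(self, emotion, user_complied):
            """Online update toward the observed outcome for this user."""
            target = 1.0 if user_complied else 0.0
            self.value[emotion] += self.lr * (target - self.value[emotion])

    sel = EmotionSelector()
    sel.feedback(sel.choose(), user_complied=False)
    sel.feedback("concerned", user_complied=True)
    print(sel.choose())  # adapts toward 'concerned' for this user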
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266
Fundamentals of soft robot locomotion.
Calisti, M; Picardi, G; Laschi, C
2017-05-01
Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction, and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with their pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).
Interacting With Robots to Investigate the Bases of Social Interaction.
Sciutti, Alessandra; Sandini, Giulio
2017-12-01
Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for understanding and anticipating the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to his actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction. We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.
Rehabilitation exoskeletal robotics. The promise of an emerging field.
Pons, José L
2010-01-01
Exoskeletons are wearable robots exhibiting a close cognitive and physical interaction with the human user. These are rigid robotic exoskeletal structures that typically operate alongside human limbs. Scientific and technological work on exoskeletons began in the early 1960s but has only recently been applied to rehabilitation and functional substitution in patients suffering from motor disorders. Key topics for further development of exoskeletons in rehabilitation scenarios include the need for robust human-robot multimodal cognitive interaction, safe and dependable physical interaction, true wearability and portability, and user aspects such as acceptance and usability. This discussion provides an overview of these aspects and draws conclusions regarding potential future research directions in robotic exoskeletons.
Human motion behavior while interacting with an industrial robot.
Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus
2012-01-01
Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other, which has led to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions, and the avoidance of such is a central safety requirement. It can be realized with sundry sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers because the robot has high idle times. Optimized path-planning algorithms have to be developed to avoid that. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when the encounter occurs. Approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities is observed.
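The clearance statistic reported above is straightforward to compute from synchronized motion-capture tracks; a minimal sketch follows (the array shapes and trajectories are invented for illustration).

    import numpy as np

    def min_clearance(human_xy, robot_xy):
        """Minimum human-robot distance over two synchronized planar
        trajectories (N x 2 arrays) and the frame where it occurs."""
        d = np.linalg.norm(human_xy - robot_xy, axis=1)
        return float(d.min()), int(d.argmin())

    t = np.linspace(0, 1, 200)[:, None]
    human = np.hstack([t * 4.0, np.full_like(t, 0.6)])  # walks along x
    robot = np.hstack([np.full_like(t, 2.0), t * 0.0])  # parked at (2, 0)
    print(min_clearance(human, robot))                  # ~0.6 m at mid-path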
Singularity now: using the ventricular assist device as a model for future human-robotic physiology.
Martin, Archer K
2016-04-01
In our 21st-century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices (VADs).
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
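Reading such an array reduces to converting raw samples with per-sensor calibration and summarizing the distribution; the sketch below is a generic illustration (the sensor count, offsets, and gains are invented, not the device's calibration).

    import numpy as np

    def pressure_map(raw_adc, offset, gain):
        """Convert one scan of an N-sensor array (ADC counts) into
        calibrated pressures and simple comfort/safety summaries."""
        p = (np.asarray(raw_adc, dtype=float) - offset) * gain  # kPa
        return p, float(p.mean()), float(p.max()), int(p.argmax())

    offset = np.full(16, 512.0)  # hypothetical 16-sensor cuff
    gain = np.full(16, 0.05)     # kPa per ADC count (assumed)
    scan = 512 + np.random.default_rng(1).integers(0, 200, 16)
    _, mean_p, peak_p, where = pressure_map(scan, offset, gain)
    print(f"mean {mean_p:.1f} kPa, peak {peak_p:.1f} kPa at sensor {where}")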
NASA Astrophysics Data System (ADS)
Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.
1997-12-01
This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments, to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
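The outer loop's search for impedance parameters is posed as an LQR problem; the fragment below solves a one-axis stand-in directly via the algebraic Riccati equation, whereas the paper's point is that integral reinforcement learning reaches the same gain without a model of the human. The state-space matrices and weights here are invented for illustration.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator stand-in for one axis of the prescribed impedance
    # model: state x = [position error, velocity error], input u.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])  # weight on tracking error (assumed)
    R = np.array([[0.1]])     # weight on (human) effort (assumed)

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain u = -K x
    print(np.round(K, 2))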
Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin
2013-01-01
As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434
McColl, Derek; Jiang, Chuan; Nejat, Goldie
2017-02-01
For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was only significant on the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.
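For readers wanting to run this kind of analysis, a minimal sketch of a 2 × 2 × 2 repeated-measures ANOVA in Python follows; the file name, column names, and dependent measure are hypothetical stand-ins, not taken from the paper.

```python
# Sketch: repeated-measures ANOVA for the eight interaction modes, assuming
# a long-format table with one row per participant x condition. Column
# names here are hypothetical, not from the paper.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("teleop_usability.csv")   # columns: participant, output,
                                           # views, input, task_time
res = AnovaRM(df, depvar="task_time", subject="participant",
              within=["output", "views", "input"],
              aggregate_func="mean").fit() # mean over repeats, if any
print(res)
```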
Syrdal, Dag Sverre; Dautenhahn, Kerstin; Koay, Kheng Lee; Ho, Wan Ching
2014-01-01
This article describes the prototyping of human-robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots as well as specific robot behaviours, such as agent migration and expressive behaviours. Evaluation of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, as well as single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique allowed them insight into the use of the robot, and that they accepted the use of the robot within the scenario.
Robot therapy: a new approach for mental healthcare of the elderly - a mini-review.
Shibata, Takanori; Wada, Kazuyoshi
2011-01-01
Mental healthcare of elderly people is a common problem in advanced countries. Recently, high technology has developed robots for use not only in factories but also for our living environment. In particular, human-interactive robots for psychological enrichment, which provide services by interacting with humans while stimulating their minds, are rapidly spreading. Such robots not only simply entertain but also render assistance, guide, provide therapy, educate, enable communication, and so on. Robot therapy, which uses robots as a substitution for animals in animal-assisted therapy and activity, is a new application of robots and is attracting the attention of many researchers and psychologists. The seal robot named Paro was developed especially for robot therapy and was used at hospitals and facilities for elderly people in several countries. Recent research has revealed that robot therapy has the same effects on people as animal therapy. In addition, it is being recognized as a new method of mental healthcare for elderly people. In this mini review, we introduce the merits and demerits of animal therapy. Then we explain the human-interactive robot for psychological enrichment, the required functions for therapeutic robots, and the seal robot. Finally, we provide examples of robot therapy for elderly people, including dementia patients.
Model-based safety analysis of human-robot interactions: the MIRAS walking assistance robot.
Guiochet, Jérémie; Hoang, Quynh Anh Do; Kaaniche, Mohamed; Powell, David
2013-06-01
Robotic systems have to cope with various execution environments while guaranteeing safety, and in particular when they interact with humans during rehabilitation tasks. These systems are often critical since their failure can lead to human injury or even death. However, such systems are difficult to validate due to their high complexity and the fact that they operate within complex, variable and uncertain environments (including users), in which it is difficult to foresee all possible system behaviors. Because of the complexity of human-robot interactions, rigorous and systematic approaches are needed to assist the developers in the identification of significant threats and the implementation of efficient protection mechanisms, and in the elaboration of a sound argumentation to justify the level of safety that can be achieved by the system. For threat identification, we propose a method called HAZOP-UML based on a risk analysis technique adapted to system description models, focusing on human-robot interaction models. The output of this step is then injected in a structured safety argumentation using the GSN graphical notation. Those approaches have been successfully applied to the development of a walking assistant robot which is now in clinical validation.
Prakash, Akanksha; Rogers, Wendy A
2015-04-01
Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
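The recurrent network model used in this line of work is a reservoir-style network with a trained linear readout. The following is a toy sketch of that general structure; all dimensions, the random data, and the ridge-regression readout are illustrative assumptions, not the authors' implementation.

```python
# Toy echo-state-network sketch: a fixed random recurrent reservoir plus a
# trained linear readout, the general structure used to map word sequences
# to predicate-argument roles. Dimensions and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 20, 300                       # one-hot word vectors in
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(seq):
    """Collect reservoir states for a sequence of one-hot word vectors."""
    x, states = np.zeros(n_res), []
    for u in seq:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-2):
    """Ridge regression from reservoir states to role-label targets."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ targets)
```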
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko
2011-01-01
Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effect of overlays that are either superimposed on or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.
NASA Astrophysics Data System (ADS)
Bharatharaj, Jaishankar; Huang, Loulin; Al-Jumaily, Ahmed; Elara, Mohan Rajesh; Krägeloh, Chris
2017-09-01
Therapeutic pet robots designed to help humans with various medical conditions could play a vital role in physiological, psychological and social-interaction interventions for children with autism spectrum disorder (ASD). In this paper, we report our findings from a robot-assisted therapeutic study conducted over seven weeks to investigate the changes in stress levels of children with ASD. For this study, we used KiliRo, a parrot-inspired therapeutic robot we developed, and analyzed urinary and salivary samples of participating children to report changes in stress levels before and after interacting with the robot. This is a pioneering human-robot interaction study to investigate the effects of robot-assisted therapy using salivary samples. The results show that the bio-inspired robot-assisted therapy can significantly help reduce the stress levels of children with ASD.
Anthropomorphism in Human–Robot Co-evolution
Damiano, Luisa; Dumouchel, Paul
2018-01-01
Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth. PMID:29632507
NASA Astrophysics Data System (ADS)
Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon
2016-07-01
With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics
Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo
2017-01-01
Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473
Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D
2017-12-01
The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
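One common way to realize a gaze-vector estimate of this kind, shown here purely as an illustrative sketch rather than the authors' implementation, is to triangulate the point closest to the two eyes' gaze rays:

```python
# Sketch: estimate a 3-D gaze point as the point closest to two gaze rays
# (one per eye), given ray origins o1, o2 and unit directions d1, d2 from
# an eye tracker. All numeric values below are hypothetical.
import numpy as np

def gaze_point(o1, d1, o2, d2):
    # Least-squares ray parameters t1, t2 minimizing
    # ||(o1 + t1*d1) - (o2 + t2*d2)||, then midpoint of the two points.
    A = np.stack([d1, -d2], axis=1)            # 3x2 system matrix
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

o1, d1 = np.array([-0.03, 0, 0]), np.array([0.10, 0.05, 1.0])
o2, d2 = np.array([+0.03, 0, 0]), np.array([-0.08, 0.05, 1.0])
print(gaze_point(o1, d1 / np.linalg.norm(d1), o2, d2 / np.linalg.norm(d2)))
```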
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Children's Imaginaries of Human-Robot Interaction in Healthcare.
Vallès-Peris, Núria; Angulo, Cecilio; Domènech, Miquel
2018-05-12
This paper analyzes children’s imaginaries of Human-Robot Interaction (HRI) in the context of social robots in healthcare, and it explores ethical and social issues when designing a social robot for a children’s hospital. Based on approaches that emphasize the reciprocal relationship between society and technology, the analytical force of imaginaries lies in their capacity to be embedded in practices and interactions as well as to affect the construction and applications of surrounding technologies. The study is based on a participatory process carried out with six-year-old children for the design of a robot. Imaginaries of HRI are analyzed from a care-centered approach focusing on children’s values and practices as related to their representation of care. The conceptualization of HRI as an assemblage of interactions, the prospective bidirectional care relationships with robots, and the engagement with the robot as an entity of multiple potential robots are the major findings of this study. The study shows the potential of studying imaginaries of HRI, and it concludes that their integration in the final design of robots is a way of including ethical values in it.
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley.
Mathur, Maya B; Reichling, David B
2016-01-01
Android robots are entering human social life. However, human-robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate a UV in subjects' explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An "investment game" showed that the UV penetrated even more deeply to influence subjects' implicit decisions concerning robots' social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human-robot social interaction, robust UV effects pose a formidable android-specific problem.
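Detecting a valley in likability ratings amounts to finding an interior local minimum in the likability-versus-human-likeness curve. The sketch below uses synthetic data and a simple cubic fit for illustration; it is not the study's data or analysis method.

```python
# Toy sketch: fit a cubic to likability ratings as a function of
# human-likeness and look for an interior local minimum (the "valley").
# The ratings below are synthetic, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
likeness = np.linspace(0.0, 1.0, 80)             # 0 = mechanical, 1 = human
ratings = (6.7 * likeness**3 - 13.5 * likeness**2 + 8.5 * likeness
           + 0.1 * rng.standard_normal(80))      # rise-dip-rise shape + noise

coeffs = np.polyfit(likeness, ratings, 3)        # cubic fit
crit = np.roots(np.polyder(coeffs))              # critical points of the fit
valley = [c.real for c in crit
          if abs(c.imag) < 1e-9 and 0 < c.real < 1
          and np.polyval(np.polyder(coeffs, 2), c.real) > 0]
print("candidate valley location(s):", valley)   # expect a dip near 0.84
```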
Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A
2014-05-01
Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.
Ghost-in-the-Machine reveals human social signals for human-robot interaction.
Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P
2015-01-01
We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.
The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.
Arnold, Thomas; Scheutz, Matthias
2017-06-01
Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots must balance tactile engagement against emotional manipulation, model intimacy on bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.
d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola
2017-04-14
In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the experimented conditions; (ii) small (i.e., < 1 cm) relative displacements between the APO cuffs and the corresponding body segments (called stability); and (iii) significant increment in the human-robot kinematics discrepancy at the hip flexion-extension joint associated with speed and assistance level increase. APO mechanics and actuation have negligible interference in human locomotion. Human kinematics was not affected by the APO under all tested conditions. In addition, under all tested conditions, there was no relevant relative displacement between the orthotic cuffs and the corresponding anatomical segments. Hence, the physical human-robot coupling is reliable. These facts prove that the adopted mechanical design of passive degrees of freedom allows an effective human-robot kinematic coupling. We believe that this analysis may be useful for the definition of evaluation metrics for the ergonomics assessment of wearable robots.
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from descriptive research on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and inferred during human-computer interaction. This new information should be considered, as it can help us build better human-computer interfaces and human-robot interfaces in the future.
Regulation and Entrainment in Human-Robot Interaction
2000-01-01
Applications for domestic, health-care-related, or entertainment-based robots motivate the development of robots that can socially interact with, and learn from, people. [Figure residue: the accompanying photos show WE-3RII, an expressive face robot developed at Waseda University; Robita, an upper-torso robot also developed at Waseda University to track speaking turns; and Kismet, the authors' expressive robot developed at MIT.]
AIonAI: a humanitarian law of artificial intelligence and robotics.
Ashrafian, Hutan
2015-02-01
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, study in this field has overwhelmingly focused on human-robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot-robot interactions. A new robotic law is proposed and termed AIonAI or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise inherent dignity and the inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, and would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
Soft-rigid interaction mechanism towards a lobster-inspired hybrid actuator
NASA Astrophysics Data System (ADS)
Chen, Yaohui; Wan, Fang; Wu, Tong; Song, Chaoyang
2018-01-01
Soft pneumatic actuators (SPAs) are intrinsically light-weight, compliant and therefore ideal to directly interact with humans and be implemented into wearable robotic devices. However, they also pose new challenges in describing and sensing their continuous deformation. In this paper, we propose a hybrid actuator design with bio-inspirations from the lobsters, which can generate reconfigurable bending movements through the internal soft chamber interacting with the external rigid shells. This design with joint and link structures enables us to exactly track its bending configurations that previously posed a significant challenge to soft robots. Analytic models are developed to illustrate the soft-rigid interaction mechanism with experimental validation. A robotic glove using hybrid actuators to assist grasping is assembled to illustrate their potentials in safe human-robot interactions. Considering all the design merits, our work presents a practical approach to the design of next-generation robots capable of achieving both good accuracy and compliance.
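Because the hybrid design exposes discrete joints and links, its bending configuration can be tracked with ordinary serial-chain forward kinematics. A toy planar sketch follows; the link lengths and joint angles are chosen only for illustration and are not the actuator's actual geometry.

```python
# Toy planar forward kinematics for a joint/link chain like the hybrid
# actuator's shell structure: the tip pose follows from accumulating joint
# angles along the chain. Link lengths and angles are illustrative.
import numpy as np

def tip_pose(link_lengths, joint_angles):
    x = y = theta = 0.0
    for L, q in zip(link_lengths, joint_angles):
        theta += q                       # accumulated bending angle
        x += L * np.cos(theta)
        y += L * np.sin(theta)
    return x, y, theta

print(tip_pose([0.02] * 5, [np.deg2rad(12)] * 5))  # 5 segments, 12 deg each
```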
We perceive a mind in a robot when we help it
Hashimoto, Takaaki; Karasawa, Kaori
2017-01-01
People sometimes perceive a mind in inorganic entities like robots. Psychological research has shown that mind perception correlates with moral judgments and that immoral behaviors (i.e., intentional harm) facilitate mind perception toward otherwise mindless victims. We conducted a vignette experiment (N = 129; mean age = 21.8 ± 6.0 years) concerning human-robot interactions and extended previous results in two ways. First, mind perception toward the robot was facilitated when it received a benevolent behavior, although only when participants took the perspective of an actor. Second, imagining a benevolent interaction led to more positive attitudes toward the robot, and this effect was mediated by mind perception. These results help predict what people’s reactions in future human-robot interactions would be like, and have implications for how to design future social rules about the treatment of robots. PMID:28727735
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
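As an illustration of what FACS-driven control can look like in code, consider the sketch below; the action-unit-to-joint table is invented for this example and is not Muecas' actual mapping.

```python
# Hypothetical sketch of FACS-driven control: map action-unit intensities
# (0..1) to joint targets of an expressive head. The AU-to-joint table is
# invented for illustration only.
AU_TO_JOINTS = {
    "AU1":  {"eyebrow_inner": 1.0},    # inner brow raiser
    "AU4":  {"eyebrow_inner": -0.8},   # brow lowerer
    "AU12": {"mouth_corner": 0.9},     # lip corner puller (smile)
    "AU26": {"jaw": 0.7},              # jaw drop
}

def aus_to_joint_targets(aus):
    """Blend weighted joint contributions from all active action units."""
    targets = {}
    for au, intensity in aus.items():
        for joint, gain in AU_TO_JOINTS.get(au, {}).items():
            targets[joint] = targets.get(joint, 0.0) + gain * intensity
    return targets

print(aus_to_joint_targets({"AU12": 0.8, "AU26": 0.3}))  # a mild smile
```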
NASA Astrophysics Data System (ADS)
Thomaz, Andrea; Breazeal, Cynthia
2008-06-01
We present a learning system, socially guided exploration, in which a social robot learns new tasks through a combination of self-exploration and social interaction. The system's motivational drives, along with social scaffolding from a human partner, bias behaviour to create learning opportunities for a hierarchical reinforcement learning mechanism. The robot is able to learn on its own, but can flexibly take advantage of the guidance of a human teacher. We report the results of an experiment that analyses what the robot learns on its own as compared to being taught by human subjects. We also analyse the video of these interactions to understand human teaching behaviour and the social dynamics of the human-teacher/robot-learner system. With respect to learning performance, human guidance results in a task set that is significantly more focused and efficient at the tasks the human was trying to teach, whereas self-exploration results in a more diverse set. Analysis of human teaching behaviour reveals insights of social coupling between the human teacher and robot learner, different teaching styles, strong consistency in the kinds and frequency of scaffolding acts across teachers and nuances in the communicative intent behind positive and negative feedback.
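The flexible mix of self-exploration and human guidance can be illustrated with a toy tabular learner in which a teacher, when present, sometimes supplies the next action. This is a sketch of the general idea, not the paper's hierarchical mechanism; the env and teacher interfaces are hypothetical.

```python
# Toy tabular Q-learner mixing self-exploration with optional social
# guidance: a teacher, when available, sometimes supplies the next action.
# The env/teacher interfaces (env.actions, env.step) are hypothetical.
import random

def q_step(Q, s, env, teacher=None, p_teach=0.5, eps=0.2, alpha=0.1, gamma=0.95):
    if teacher is not None and random.random() < p_teach:
        a = teacher(s)                             # socially guided action
    elif random.random() < eps:
        a = random.choice(env.actions(s))          # self-exploration
    else:
        a = max(env.actions(s), key=lambda act: Q.get((s, act), 0.0))
    s2, r = env.step(s, a)                         # hypothetical transition
    best_next = max((Q.get((s2, act), 0.0) for act in env.actions(s2)),
                    default=0.0)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)  # TD update
    return s2
```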
Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action
Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra
2014-01-01
Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented to an anthropomorphic robot. For evaluation of the concept an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
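The oscillator-theoretic core of this approach can be illustrated with two Kuramoto-style phase oscillators whose mutual coupling pulls the human and robot movement phases into lock; all parameters below are illustrative, not taken from the paper.

```python
# Minimal sketch: two coupled phase oscillators (human and robot movement
# phases). Coupling drives the phase difference toward a small locked
# value when |w_r - w_h| < 2K. Parameters are illustrative.
import numpy as np

dt, K = 0.01, 1.5                 # time step (s), coupling gain
w_h, w_r = 2.0, 2.3               # natural frequencies (rad/s)
phi_h, phi_r = 0.0, 1.2           # initial phases

for _ in range(3000):
    phi_h += dt * (w_h + K * np.sin(phi_r - phi_h))
    phi_r += dt * (w_r + K * np.sin(phi_h - phi_r))

print("residual phase difference:",
      np.angle(np.exp(1j * (phi_r - phi_h))))  # wrapped to (-pi, pi]
```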
A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction
2011-10-01
Trust directly affects the willingness of people to accept robot-produced information, follow robots' suggestions, and thus benefit from the advantages inherent … perceived complexity of operation). Consequently, if the perceived risk of using the robot exceeds its perceived benefit, practical operators almost … necessary presence of a human caregiver (Graf, Hans, & Schraft, 2004). Other robotic devices, such as wheelchairs (Yanco, 2001) and exoskeletons (e.g. …
Guerrero, Carlos Rodriguez; Fraile Marinero, Juan Carlos; Turiel, Javier Perez; Muñoz, Victor
2013-11-01
Human motor performance, speed and variability are highly susceptible to emotional states. This paper reviews the impact of emotions on motor control performance, and studies the possibility of improving the perceived skill/challenge relation in a multimodal neural rehabilitation scenario, by means of a biocybernetic controller that modulates the assistance provided by a haptically controlled robot in reaction to undesirable physical and mental states. Results from psychophysiological, performance and self-assessment data for closed-loop experiments, in contrast with their open-loop counterparts, suggest that the proposed method had a positive impact on the overall challenge/skill relation, leading to an enhanced physical human-robot interaction experience.
Pragmatic Frames for Teaching and Learning in Human-Robot Interaction: Review and Challenges.
Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J; Oudeyer, Pierre-Yves
2016-01-01
One of the big challenges in robotics today is to learn from human users that are inexperienced in interacting with robots but yet are often used to teach skills flexibly to other humans and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used in practice, but not always explicitly, basic elements of the pragmatic frames machinery. However, we also show that pragmatic frames have so far been used in a very restricted way as compared to how they are used in human-human interaction and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent of existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on the future research direction describing the relevant key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching.
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, an accurate system for real-time 3-D perception of humans by a mobile robot can be created.
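The first stage described above, separating candidate clusters by removing support planes, can be sketched with a simple RANSAC plane fit. The Nx3 point-cloud format, iteration count, and distance threshold below are assumptions for illustration, not the paper's parameters.

```python
# Sketch: remove the dominant plane (e.g., the ground) from a depth-camera
# point cloud with a simple RANSAC plane fit, leaving off-plane points as
# candidate clusters for a detector cascade. Input is a hypothetical Nx3 array.
import numpy as np

def remove_plane(points, n_iters=200, thresh=0.03, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p1) @ normal)  # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]             # keep off-plane points
```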
Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre
2013-01-01
Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ in comparison to their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an 'interviewer' for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625
Cognitive and sociocultural aspects of robotized technology: innovative processes of adaptation
NASA Astrophysics Data System (ADS)
Kvesko, S. B.; Kvesko, B. B.; Kornienko, M. A.; Nikitina, Y. A.; Pankova, N. M.
2018-05-01
The paper dwells upon the interaction between socio-cultural phenomena and the cognitive characteristics of robotized technology. An interdisciplinary approach was employed in order to cast light on the manifold and multilevel identity of scientific advance in terms of robotized technology within the mental realm. Analyzing robotized technology from the viewpoint of its significance for modern society is one of the upcoming trends in the contemporary scientific realm. The robots now in production are capable of interacting with people; this results in a growing need for studies on the social status of robotized technological items. The socio-cultural aspect of cognitive robotized technology is reflected in the fact that nature becomes 'aware' of itself via the human brain, and a human being tends to strive for perfection in their intellectual and moral dimensions.
A Robotic Coach Architecture for Elder Care (ROCARE) Based on Multi-user Engagement Models
Fan, Jing; Bian, Dayi; Zheng, Zhi; Beuscher, Linda; Newhouse, Paul A.; Mion, Lorraine C.; Sarkar, Nilanjan
2017-01-01
The aging population, with its concomitant medical conditions and physical and cognitive impairments, at a time of strained resources, establishes the urgent need to explore advanced technologies that may enhance function and quality of life. Recently, robotic technology, especially socially assistive robotics, has been investigated to address the physical, cognitive, and social needs of older adults. Most systems to date have focused predominantly on one-on-one human-robot interaction (HRI). In this paper, we present a multi-user engagement-based robotic coach system architecture (ROCARE). ROCARE is capable of administering both one-on-one and multi-user HRI, providing implicit and explicit channels of communication, and individualized activity management for long-term engagement. Two preliminary feasibility studies, a one-on-one interaction and a triadic interaction with two humans and a robot, were conducted, and the results indicated potential usefulness and acceptance by older adults, with and without cognitive impairment. PMID:28113672
Animal Robot Assisted-therapy for Rehabilitation of Patient with Post-Stroke Depression
NASA Astrophysics Data System (ADS)
Zikril Zulkifli, Winal; Shamsuddin, Syamimi; Hwee, Lim Thiam
2017-06-01
Recently, the utilization of therapeutic animal robots has expanded. This research aims to explore robotic applications for mental healthcare in Malaysia through human-robot interaction (HRI). PARO, a robotic seal, was developed to have psychological effects on humans. Major Depressive Disorder (MDD) is a common but severe mood disorder. This study focuses on the interaction protocol between PARO and patients with MDD. Initially, twelve rehabilitation patients gave a subjective evaluation of their first interaction with PARO. Next, a therapeutic interaction environment was set up with PARO in it to act as an augmentation strategy alongside other psychological interventions for post-stroke depression. Each patient was exposed to PARO for 20 minutes. The results of the behavioural analysis were complemented with information from the HRI survey questions. The analysis also observed that individual interactors engaged with the robot in diverse ways based on their needs. Results show a positive reaction toward the acceptance of an animal robot. The intended outcome is to reduce stress levels among patients through facilitated therapy sessions with PARO.
Long-term knowledge acquisition using contextual information in a memory-inspired robot architecture
NASA Astrophysics Data System (ADS)
Pratama, Ferdian; Mastrogiovanni, Fulvio; Lee, Soon Geul; Chong, Nak Young
2017-03-01
In this paper, we present a novel cognitive framework allowing a robot to form memories of relevant traits of its perceptions and to recall them when necessary. The framework is based on two main principles: on the one hand, we propose an architecture inspired by current knowledge in human memory organisation; on the other hand, we integrate such an architecture with the notion of context, which is used to modulate the knowledge acquisition process when consolidating memories and forming new ones, as well as with the notion of familiarity, which is employed to retrieve proper memories given relevant cues. Although much research has been carried out that exploits Machine Learning approaches to provide robots with internal models of their environment (including the objects and events occurring therein), we argue that such approaches may not be the right direction to follow if long-term, continuous knowledge acquisition is to be achieved. As a case study scenario, we focus on both robot-environment and human-robot interaction processes. In the case of robot-environment interaction, a robot performs pick and place movements using the objects in the workspace, at the same time observing their displacement on a table in front of it, and progressively forms memories defined as relevant cues (e.g. colour, shape or relative position) in a context-aware fashion. As far as human-robot interaction is concerned, the robot can recall specific snapshots representing past events using both sensory information and contextual cues upon request by humans.
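The context-modulated consolidation and familiarity-based recall described above can be illustrated with a toy memory store. The data structures and the scoring rule below are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    cues: dict        # e.g. {"colour": "red", "shape": "cube"}
    context: str      # the context that was active when the memory formed

class MemoryStore:
    def __init__(self):
        self.items = []

    def consolidate(self, cues, context):
        """Form a new memory, tagged with the current context."""
        self.items.append(Memory(dict(cues), context))

    def familiarity(self, memory, cues, context):
        """Cue overlap, boosted when the stored context matches the current one."""
        overlap = sum(memory.cues.get(k) == v for k, v in cues.items())
        return overlap + (1.0 if memory.context == context else 0.0)

    def recall(self, cues, context):
        """Retrieve the most familiar memory given partial cues."""
        return max(self.items,
                   key=lambda m: self.familiarity(m, cues, context),
                   default=None)

store = MemoryStore()
store.consolidate({"colour": "red", "shape": "cube"}, context="table-top")
store.consolidate({"colour": "blue", "shape": "ball"}, context="floor")
print(store.recall({"colour": "red"}, context="table-top"))
```

The context bonus is the key design point: the same cue set can retrieve different memories depending on the situation in which recall happens.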
Thellman, Sam; Silvervarg, Annika; Ziemke, Tom
2017-01-01
People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap or differ from the case of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than but still largely similar to the agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.
Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.
Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos
2017-06-05
Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required for both safety and increased control performance of the device with a more comfortable experience for the user. In order to provide an efficient interaction feedback between the user and rehabilitation device, high performance sensing units are demanded. This work introduces a novel design of a multi-axis force sensor dedicated for measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is conceived such that it has different sensitivity characteristics for the three axes of interest having also movable parts in order to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315
An argument for human exploration of the moon and Mars.
Spudis, P D
1992-01-01
A debate on the merits of human space travel as opposed to robotic exploration is presented. While robotic space travel would be considerably less expensive, the author takes the position that there are certain skills and research abilities that only humans possess. Human contributions to past lunar exploration are considered, along with a discussion of the interaction of humans with robotics and other artificial-intelligence or computer-driven technologies. The author concludes that while robots and machines are tools which should be incorporated into space travel, they are not adequate substitutes for people.
SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.
Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan
2015-11-24
Brain-Robot Interaction (BRI), which provides an innovative communication pathway between human and a robotic device via brain signals, is prospective in helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software as well as user developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility because it is developed by integrating multiple software programs. The results show that using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
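A widely used way to implement the SSVEP feature-analysis step is canonical correlation analysis (CCA) against sinusoidal references at the candidate stimulus frequencies. The sketch below shows that generic detector; the paper's per-subject offline classifier may differ, and the frequencies and sampling rate here are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_detect(eeg, fs, freqs, n_harmonics=2):
    """eeg: array (samples, channels). Returns the stimulus frequency whose
    sine/cosine reference correlates best with the EEG window."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in freqs:
        # Reference: sin/cos at the frequency and its harmonics.
        ref = np.column_stack(
            [fn(2 * np.pi * h * f * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1)
        x, y = cca.fit_transform(eeg, ref)
        r = np.corrcoef(x[:, 0], y[:, 0])[0, 1]
        if r > best_r:
            best_f, best_r = f, r
    return best_f

# Usage (illustrative): ssvep_detect(eeg_window, fs=256, freqs=[6.0, 7.5, 8.6, 10.0])
```

The detected frequency is then mapped to a robot command (e.g., walk forward, turn) in the online control loop.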
Coordinating a Team of Robots for Urban Reconnaissance
2010-11-01
Land Warfare Conference 2010, Brisbane, November 2010. Coordinating a Team of Robots for Urban Reconnaissance. Pradeep Ranganathan, Ryan... [extraction fragment] In today's systems, a human operator controls a single robot, micro-managing every action; this micro-management becomes impossible with more robots. Behavioral autonomy is also critical for the human operator to interact productively without being inundated with micro-management. [Figure 1 caption truncated]
Human-like object tracking and gaze estimation with PKD android
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2018-01-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold : to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
Measuring empathy for human and robot hand pain using electroencephalography.
Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru
2015-11-03
This study provides the first physiological evidence of humans' ability to empathize with robot pain and highlights the difference in empathy for humans and robots. We performed electroencephalography in 15 healthy adults who observed either human- or robot-hand pictures in painful or non-painful situations, such as a finger cut by a knife. We found that the descending phase of the P3 component was larger for the painful stimuli than the non-painful stimuli, regardless of whether the hand belonged to a human or robot. In contrast, the ascending phase of the P3 component at the frontal-central electrodes was increased by painful human stimuli but not by painful robot stimuli, although the interaction in the ANOVA was only marginally significant. These results suggest that we empathize with humanoid robots in late top-down processing similarly to how we empathize with other humans. However, the beginning of the top-down process of empathy is weaker for robots than for humans.
NASA Astrophysics Data System (ADS)
Bartolozzi, Chiara; Natale, Lorenzo; Nori, Francesco; Metta, Giorgio
2016-09-01
Tactile sensors provide robots with the ability to interact with humans and the environment with great accuracy, yet technical challenges remain for electronic-skin systems to reach human-level performance.
On the stiffness analysis of a cable driven leg exoskeleton.
Sanjeevi, N S S; Vashista, Vineet
2017-07-01
Robotic systems are being used for gait rehabilitation of patients with neurological disorders. These devices are externally powered to apply forces on human limbs to assist leg motion. Patients walking with these devices adapt their walking pattern in response to the applied forces. The efficacy of a rehabilitation paradigm thus depends on the human-robot interaction. A cable driven leg exoskeleton (CDLE) uses actuated cables to apply external joint torques on the human leg. Cables are lightweight and flexible but can only pull, so a CDLE requires redundant cables. The redundancy in a CDLE can be utilized to appropriately tune the robot's performance. In this work, we present the stiffness analysis of a CDLE. Different stiffness performance indices are established to study the role of system parameters in improving the human-robot interaction.
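For context, stiffness analyses of cable-driven mechanisms typically build on a Jacobian-based stiffness matrix. A common general form (a sketch of the standard cable-robot result, not necessarily the exact notation of this paper) is:

```latex
K(\mathbf{q}) \;=\; J^{\mathsf{T}}(\mathbf{q})\,\operatorname{diag}(k_1,\ldots,k_m)\,J(\mathbf{q})
\;+\; \sum_{i=1}^{m} t_i \,\frac{\partial J_i^{\mathsf{T}}(\mathbf{q})}{\partial \mathbf{q}}
```

where \(J(\mathbf{q})\) is the cable Jacobian, \(k_i\) the axial stiffness of cable \(i\), and \(t_i\) its tension. Because every \(t_i\) must remain positive, the redundant cables leave a family of admissible tension vectors, and choosing among them tunes \(K\); this is the freedom that stiffness performance indices are designed to quantify.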
Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J; Wrede, Britta
2014-01-01
Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.
Ethorobotics: A New Approach to Human-Robot Relationship
Miklósi, Ádám; Korondi, Péter; Matellán, Vicente; Gácsi, Márta
2017-01-01
Here we aim to lay the theoretical foundations of human-robot relationship drawing upon insights from disciplines that govern relevant human behaviors: ecology and ethology. We show how the paradox of the so called “uncanny valley hypothesis” can be solved by applying the “niche” concept to social robots, and relying on the natural behavior of humans. Instead of striving to build human-like social robots, engineers should construct robots that are able to maximize their performance in their niche (being optimal for some specific functions), and if they are endowed with appropriate form of social competence then humans will eventually interact with them independent of their embodiment. This new discipline, which we call ethorobotics, could change social robotics, giving a boost to new technical approaches and applications. PMID:28649213
Robotic Technology Development at Ames: The Intelligent Robotics Group and Surface Telerobotics
NASA Technical Reports Server (NTRS)
Bualat, Maria; Fong, Terrence
2013-01-01
Future human missions to the Moon, Mars, and other destinations offer many new opportunities for exploration. But astronaut time will always be limited, and some work will not be feasible for humans to do manually. Robots, however, can complement human explorers, performing work autonomously or under remote supervision from Earth. Since 2004, the Intelligent Robotics Group has been working to make human-robot interaction efficient and effective for space exploration. A central focus of our research has been to develop and field test robots that benefit human exploration. Our approach is inspired by lessons learned from the Mars Exploration Rovers, as well as human spaceflight programs, including Apollo, the Space Shuttle, and the International Space Station. We conduct applied research in computer vision, geospatial data systems, human-robot interaction, planetary mapping and robot software. In planning for future exploration missions, architecture and study teams have made numerous assumptions about how crew can be telepresent on a planetary surface by remotely operating surface robots from space (i.e. from a flight vehicle or deep space habitat). These assumptions include estimates of technology maturity, existing technology gaps, and likely operational and functional risks. These assumptions, however, are not grounded in actual experimental data. Moreover, no crew-controlled surface telerobotic system has yet been fully tested, or rigorously validated, through flight testing. During Summer 2013, we conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover across short time delays. The tests simulated portions of a proposed human-robotic Lunar Waypoint mission, in which astronauts in lunar orbit remotely operate a planetary rover on the lunar farside to deploy a radio telescope array. We used these tests to obtain baseline engineering data.
A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot.
Kim, Min Su; Lee, Jae Geun; Kang, Soon Ju
2016-06-08
In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robots "Pet-Bot". To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, which means that the robot self-identifies the owner's approach on its own. Second, Pet-Bot's activity is synchronized with the owner's behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but meets a situation where it cannot make a decision, or the owner wants to stop the Pet-Bot synchronization mode to relax). In this case, we have adopted a gesture recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real-time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify their motion recognition of and synchronization with humans in real-time. The results showed a guaranteed level of accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot's control performance when following or leading a human in real-time.
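The gesture-recognition function mentioned above can be approximated by simple signal-energy rules on the 3-D accelerometer stream. The sketch below is a hypothetical "shake to stop synchronization" trigger; the thresholds and the peak-counting rule are invented for illustration and are not the Pet-Bot firmware:

```python
import math

def magnitude(sample):
    """Euclidean magnitude of one (ax, ay, az) accelerometer sample."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_shake(samples, g=9.81, thresh=2.5, min_peaks=3):
    """Flag a deliberate shake: several samples deviating strongly from 1 g."""
    peaks = sum(1 for s in samples if abs(magnitude(s) - g) > thresh)
    return peaks >= min_peaks

window = [(0.1, 9.8, 0.2)] * 20 + [(15.0, 2.0, 1.0)] * 5   # quiet, then a shake
print(detect_shake(window))   # True -> e.g., pause the synchronization mode
```

A deployed system would run this over a sliding window on the wearable and send the resulting event to the robot over the BLE link.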
ERIC Educational Resources Information Center
Simut, Ramona E.; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram
2016-01-01
Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated if the children's interaction with a human differed from the interaction with a social robot during a play task. Also, it was examined if…
Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.
Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan
2015-12-01
Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well tolerated in the sample; children with ASD exhibited greater attention to the robotic system than to the human administrator, and for children with ASD imitation performance appeared superior during the robotic interaction.
Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.
Lasota, Przemyslaw A; Shah, Julie A
2015-02-01
The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
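The fluency metrics reported here (concurrent motion, idle times, separation distance) are straightforward to compute from time-stamped interaction logs. The sketch below shows one plausible formulation; the log field names are ours, not the authors':

```python
def fluency_metrics(log):
    """log: list of per-timestep dicts with keys
    'human_moving' (bool), 'robot_moving' (bool), 'dist' (meters)."""
    n = len(log)
    concurrent = sum(s["human_moving"] and s["robot_moving"] for s in log) / n
    human_idle = sum(not s["human_moving"] for s in log) / n
    robot_idle = sum(not s["robot_moving"] for s in log) / n
    mean_sep = sum(s["dist"] for s in log) / n
    return {"concurrent_motion": concurrent, "human_idle": human_idle,
            "robot_idle": robot_idle, "mean_separation_m": mean_sep}

log = [{"human_moving": True, "robot_moving": True, "dist": 0.9},
       {"human_moving": True, "robot_moving": False, "dist": 1.1}]
print(fluency_metrics(log))
```

Reporting these as fractions of task time is what lets the paper compare the adaptive and shortest-path conditions on a common scale.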
Ueyama, Yuki
2015-01-01
One of the core features of autism spectrum disorder (ASD) is impaired reciprocal social interaction, especially in processing emotional information. Social robots are used to encourage children with ASD to take the initiative and to interact with the robotic tools to stimulate emotional responses. However, the existing evidence is limited by poor trial designs. The purpose of this study was to provide computational evidence in support of robot-assisted therapy for children with ASD. We thus propose an emotional model of ASD that adapts a Bayesian model of the uncanny valley effect, which holds that a human-looking robot can provoke repulsion and sensations of eeriness. Based on the unique emotional responses of children with ASD to the robots, we postulate that ASD induces a unique emotional response curve, more like a cliff than a valley. Thus, we performed numerical simulations of robot-assisted therapy to evaluate its effects. The results showed that, although a stimulus fell into the uncanny valley in the typical condition, it was effective at avoiding the uncanny cliff in the ASD condition. Consequently, individuals with ASD may find it more comfortable, and may modify their emotional response, if the robots look like deformed humans, even if they appear “creepy” to typical individuals. Therefore, we suggest that our model explains the effects of robot-assisted therapy in children with ASD and that human-looking robots may have potential advantages for improving social interactions in ASD. PMID:26389805
ERIC Educational Resources Information Center
Arita, A.; Hiraki, K.; Kanda, T.; Ishiguro, H.
2005-01-01
As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the…
Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction.
Chen, Tiffany L; Bhattacharjee, Tapomayukh; McKay, J Lucas; Borinski, Jacquelyn E; Hackney, Madeleine E; Ting, Lena H; Kemp, Charles C
2015-01-01
Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions.
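The admittance-gain manipulation implies a controller that maps sensed end-effector forces to base velocity, so a higher gain makes the robot follow the leader's pull with less lag. A minimal sketch of such a law, with illustrative gains and saturation, is:

```python
def admittance_step(force_xy, gain=0.8, v_max=1.0):
    """Map the sensed hand force (N) in the plane to a commanded base
    velocity (m/s), saturated so the base never exceeds v_max."""
    vx, vy = gain * force_xy[0], gain * force_xy[1]
    scale = min(1.0, v_max / max(abs(vx), abs(vy), 1e-9))
    return vx * scale, vy * scale

print(admittance_step((1.2, -0.4)))   # forward-left pull -> forward-left motion
```

Arm stiffness plays the complementary role: stiffer arms transmit the leader's forces to the sensors more directly, which is consistent with the high-gain, high-stiffness conditions performing best.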
Human brain spots emotion in non-humanoid robots
Foucher, Aurélie; Jouvent, Roland; Nadel, Jacqueline
2011-01-01
The computation by which our brain elaborates fast responses to emotional expressions is currently an active field of brain studies. Previous studies have focused on stimuli taken from everyday life. Here, we investigated event-related potentials in response to happy vs neutral stimuli of human and non-humanoid robots. At the behavioural level, emotion shortened reaction times similarly for robotic and human stimuli. Early P1 wave was enhanced in response to happy compared to neutral expressions for robotic as well as for human stimuli, suggesting that emotion from robots is encoded as early as human emotion expression. Congruent with their lower faceness properties compared to human stimuli, robots elicited a later and lower N170 component than human stimuli. These findings challenge the claim that robots need to present an anthropomorphic aspect to interact with humans. Taken together, such results suggest that the early brain processing of emotional expressions is not bounded to human-like arrangements embodying emotion. PMID:20194513
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior responding to a human's linguistic instruction. After learning, the network actually formed the attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
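The core mechanism, a recurrent network whose context layer carries the interaction state, can be illustrated with a minimal Elman-style step. The sizes and random weights below are placeholders; the trained architecture in the paper is richer:

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, Wo):
    """One Elman-style step: the hidden/context state h carries where in
    the interaction the system currently is (its 'internal dynamics')."""
    h_new = np.tanh(x @ Wx + h @ Wh)
    y = h_new @ Wo            # output: behavior command or language tokens
    return y, h_new

rng = np.random.default_rng(1)
Wx = rng.normal(size=(4, 8))   # input (e.g., words + proprioception) -> hidden
Wh = rng.normal(size=(8, 8))   # hidden -> hidden (context recurrence)
Wo = rng.normal(size=(8, 3))   # hidden -> output
h = np.zeros(8)
for t in range(3):             # sequential flow of the interactive task
    y, h = rnn_step(rng.normal(size=4), h, Wx, Wh, Wo)
    print(y.round(2))
```

Because recognition and generation share the same state space, attractors in h (branches, cycles, fixed points) are what let the system switch phases without explicit signals.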
A meta-analysis of factors affecting trust in human-robot interaction.
Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja
2011-10-01
We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
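For readers comparing the two effect sizes: r and Cohen's d live on different scales, and the standard textbook conversion between them (not taken from the paper) is:

```latex
d = \frac{2r}{\sqrt{1 - r^{2}}}, \qquad r = \frac{d}{\sqrt{d^{2} + 4}}
```

With r = +0.26 this gives d ≈ 0.54. The reported experimental d = +0.71 is larger, which is unsurprising: the correlational and experimental estimates come from different subsets of studies, so the two values need not coincide.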
Human-Robot Interaction in High Vulnerability Domains
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2016-01-01
Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. The complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots; from direct manual control, shared human-robotic control, or no active human control (i.e. human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of the technologies on the system performance and the on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.
Forming Human-Robot Teams Across Time and Space
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.
2012-01-01
NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection as well as force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.
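The supervisory mode described here, commanding a zero-delay local simulation while the real robot's telemetry arrives a round trip later, can be sketched as a delayed echo loop. The structure below is illustrative, not NASA's software:

```python
from collections import deque

class PredictiveDisplay:
    """Supervisor drives a local simulation instantly; commands travel to
    the remote robot and its telemetry returns one round trip later."""
    def __init__(self, round_trip_steps):
        self.pipe = deque([None] * round_trip_steps)  # models light-time delay
        self.sim_state = 0.0

    def command(self, delta):
        self.sim_state += delta        # instant feedback in the simulation
        self.pipe.append(delta)        # command departs for the robot
        return self.pipe.popleft()     # telemetry of an older command arrives

disp = PredictiveDisplay(round_trip_steps=3)
for step in range(5):
    echoed = disp.command(0.1)
    print(f"sim={disp.sim_state:.1f}  robot-echo={echoed}")
```

The gap between `sim_state` and the echoed telemetry is exactly why the robot itself must guarantee safety over at least the round-trip period.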
ERIC Educational Resources Information Center
Dunst, Carl J.; Trivette, Carol M.; Prior, Jeremy; Hamby, Deborah W.; Embler, Davon
2013-01-01
Findings from a survey of parents' ratings of seven different human-like qualities of four socially interactive robots are reported. The four robots were Popchilla, Keepon, Kaspar, and CosmoBot. The participants were 96 parents and other primary caregivers of young children with disabilities 1 to 12 years of age. Results showed that Popchilla, a…
Robots for better health and quality of life. | NIH MedlinePlus the Magazine
Feature: Robotic Innovations. Robots for better health and quality of life. [Interactive web page; full text not extracted.] A social-robot "buddy" for kids: a preschooler interacts with a social robot.
Applications of artificial intelligence in safe human-robot interactions.
Najmaei, Nima; Kermani, Mehrdad R
2011-04-01
The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
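A self-organizing map adapts its nodes toward incoming observations, here the human's footprint positions. A minimal one-dimensional SOM update of the kind mentioned above, with illustrative dimensions and rates, is:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One SOM update: pull the best matching unit (and its neighbours)
    toward the observation x. weights: (n_nodes, dim)."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))     # neighbourhood kernel
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
W = rng.random((10, 2))        # 10 nodes tracking 2-D footprint positions
for _ in range(100):
    W = som_step(W, rng.random(2))
```

After training, the node lattice summarizes where the human tends to be, which is the kind of compact model the danger-assessment and reactive-control layers can query quickly.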
So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing
2017-07-04
While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation: Compared to typically developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. A robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production, except that there was no strong evidence showing children in the intervention group could produce gestures accurately in human-to-human interaction.
Ghost-in-the-Machine reveals human social signals for human–robot interaction
Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P.
2015-01-01
We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience. PMID:26582998
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for streamlined development of task executions, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. It is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. The task was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for performing a wide range of high-level tasks using a similar framework, allowing the robot to accomplish them with minimal to no human interaction.
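The abstract describes the pattern, supervised execution with autonomous verification and corrective actions, without implementation detail. A minimal Python sketch of that pattern follows, under assumptions: Step, run_supervised, and the callables are illustrative names, not the published TaskForce API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Step:
    """One commanded action, with an autonomous check and optional recovery."""
    name: str
    execute: Callable[[], None]            # send the command to the robot
    verify: Callable[[], bool]             # post-condition check on sensor data
    correct: Optional[Callable[[], None]] = None  # robot-side corrective action

def run_supervised(steps: List[Step]) -> bool:
    """Execute steps in order; on a failed check, try the corrective action
    once before escalating to the human supervisor."""
    for step in steps:
        step.execute()
        if step.verify():
            continue
        if step.correct is not None:
            step.correct()                 # the robot tries to fix it itself
            if step.verify():
                continue
        print(f"[{step.name}] verification failed; supervisor input needed")
        return False                       # hand control back to the operator
    return True
```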
Vocal emotion of humanoid robots: a study from brain mechanism.
Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong
2014-01-01
Driven by rapid ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, with the aim of facilitating communication between machines and human beings, and especially to vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developing an experiment to observe brain responses with fMRI, in order to analyze the vocal emotion of human beings. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings. PMID:24587712
Affordance Templates for Shared Robot Control
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kim
2014-01-01
This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., the size, location, and shape of the objects that the robot must interact with). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
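The paper's template schema is not given in the abstract; as a rough illustration of the idea, here is a minimal sketch of a run-time-adjustable template, in which supervisor adjustments to scale and origin propagate to the robot's end-effector waypoints (all field and object names are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]   # x, y, z in the template frame (simplified)

@dataclass
class AffordanceTemplate:
    """An object model plus the end-effector waypoints attached to it.
    The supervisor scales and moves the template to match the observed
    object; the waypoints follow, so one template serves many instances."""
    object_name: str
    scale: float = 1.0
    origin: Pose = (0.0, 0.0, 0.0)
    waypoints: List[Pose] = field(default_factory=list)

    def resolved_waypoints(self) -> List[Pose]:
        """Waypoints transformed into the world frame for execution."""
        ox, oy, oz = self.origin
        return [(ox + self.scale * x, oy + self.scale * y, oz + self.scale * z)
                for (x, y, z) in self.waypoints]

# e.g. a grasp template re-sized at run time to match the observed object
bag = AffordanceTemplate("cargo_bag", waypoints=[(0.10, 0.0, 0.0), (0.0, 0.10, 0.0)])
bag.scale = 1.3        # adjusted by the human supervisor from the GUI
print(bag.resolved_waypoints())
```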
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study, on video overlays, investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored: command guidance (CG), situation guidance (SG), and their combination (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing them to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. HRP gaps: This HRI research contributes to the closure of HRP gaps by providing information on how display and control characteristics (those related to guidance, feedback, and command modalities) affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for designing effective human-robot interfaces.
NASA Astrophysics Data System (ADS)
Takasugi, Shoji; Yamamoto, Tomohito; Muto, Yumiko; Abe, Hiroyuki; Miyake, Yoshihiro
The purpose of this study is to clarify the effects of timing control of utterance and body motion in human-robot interaction. Our previous study revealed correlations between the timing of utterance and body motion in human-human communication. Here we proposed a timing control model based on that research and evaluated its influence on realizing human-like communication using a questionnaire. The results showed a difference in effectiveness between communication with the timing control model and communication without it. In addition, elderly people evaluated the communication with timing control much more highly than younger people did. These results show not only the importance of the timing control of utterance and body motion in human communication but also its effectiveness for realizing human-like human-robot interaction.
Augmented Robotics Dialog System for Enhancing Human-Robot Interaction.
Alonso-Martín, Fernando; Castro-González, Aĺvaro; Luengo, Francisco Javier Fernandez de Gorostiza; Salichs, Miguel Ángel
2015-07-03
Augmented reality, augmented television, and second screen are cutting-edge technologies that provide end users with extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, while providing a more satisfactory experience. In the present paper, we apply this idea to human-robot interaction (HRI), that is, to how users and robots exchange information. The ultimate goal of this paper is to improve the quality of HRI by developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) grammar-free multimodal input of (verbal and/or written) text; and (ii) contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the information extracted from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement, and contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human-robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications.
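The exact enrichment pipeline is not given in the abstract; the following toy sketch only shows the general idea of linking an entity heard in dialog to facts in a knowledge base and proposing a coherent follow-up topic. The in-memory KB dict merely stands in for a real semantic web query, and its contents are invented:

```python
# Toy in-memory KB; a real system would query a semantic web endpoint.
KB = {
    "madrid": {"type": "city", "country": "Spain", "related": "Prado Museum"},
    "prado museum": {"type": "museum", "city": "Madrid", "related": "Goya"},
}

def enrich(utterance: str):
    """Return (entity, facts) for the first KB entity found in the utterance."""
    for entity, facts in KB.items():
        if entity in utterance.lower():
            return entity, facts
    return None, None

entity, facts = enrich("I am travelling to Madrid next week")
if facts:
    # Enriched information lets the robot propose a coherent new topic.
    print(f"Proactive topic: have you visited the {facts['related']}?")
```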
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot to a semi-structured hospital environment is an easier problem to tackle, with results that could benefit the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and the human each perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera, and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feeds and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient-sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
Emergence of Burden Sharing among Robots with an Emotion Model
NASA Astrophysics Data System (ADS)
Kusano, Takuya; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system. It can adapt to various circumstances and offers flexibility across varying tasks. In such a system, robots must build cooperative relations and act as an organization to attain a shared purpose. Here, the group behavior of insects, which lack advanced individual abilities, offers a useful model. For example, ants, which are social insects, produce systematic collective activities through very simple interactions. Whereas ants communicate with chemical signals, humans communicate through words and gestures. In this paper, we focus on interaction from a psychological viewpoint, and a human emotion model was used as the parameter base for the robots' motion planning. The robots performed two-way actions in a test field containing obstacles. As a result, burden sharing into roles such as guide and carrier emerged even though the robots had a simple setup.
A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot
Kim, Min Su; Lee, Jae Geun; Kang, Soon Ju
2016-01-01
In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robot a “Pet-Bot”. To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, meaning that the robot detects the owner's approach on its own. Second, the Pet-Bot's activity is synchronized with the owner's behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but meets a situation where it cannot make a decision, or when the owner wants to stop the Pet-Bot synchronization mode to relax); for these cases, we adopted a gesture recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify their recognition of, and synchronization with, human motion in real time. The results showed an accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot's control performance when following or leading a human in real time. PMID:27338384
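As a rough illustration of the self-organizing interaction described above, here is a minimal state-machine sketch in which the 125 kHz LF channel announces the owner's identity, BLE carries the session, and a wrist gesture ends synchronization. The event names, owner ID, and stop gesture are assumptions, not taken from the paper:

```python
from enum import Enum, auto

class BotState(Enum):
    IDLE = auto()      # waiting for the owner's LF beacon
    PAIRED = auto()    # owner identified, BLE session being established
    SYNC = auto()      # following/leading, mirroring the owner's activity

OWNER_ID = "wearable-42"   # illustrative identifier

def step(state: BotState, event: dict) -> BotState:
    """One transition of a simplified Pet-Bot interaction state machine.
    The 125 kHz LF channel only announces proximity and identity; activity
    data and gesture commands travel over the 2.4 GHz BLE session."""
    if state is BotState.IDLE and event.get("lf_beacon") == OWNER_ID:
        return BotState.PAIRED         # owner detected, no human action needed
    if state is BotState.PAIRED and event.get("ble_connected"):
        return BotState.SYNC
    if state is BotState.SYNC and event.get("gesture") == "stop":
        return BotState.IDLE           # wrist gesture ends synchronization
    return state

s = step(BotState.IDLE, {"lf_beacon": OWNER_ID})  # IDLE -> PAIRED
s = step(s, {"ble_connected": True})              # PAIRED -> SYNC
```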
Interactive autonomy and robotic skills
NASA Technical Reports Server (NTRS)
Kellner, A.; Maediger, B.
1994-01-01
Current concepts of robot-supported operations for space laboratories (payload servicing, inspection, repair, and ORU exchange) are mainly based on the concept of 'interactive autonomy' which implies autonomous behavior of the robot according to predefined timelines, predefined sequences of elementary robot operations and within predefined world models supplying geometrical and other information for parameter instantiation on the one hand, and the ability to override and change the predefined course of activities by human intervention on the other hand. Although in principle a very powerful and useful concept, in practice the confinement of the robot to the abstract world models and predefined activities appears to reduce the robot's stability within real world uncertainties and its applicability to non-predefined parts of the world, calling for frequent corrective interaction by the operator, which in itself may be tedious and time-consuming. Methods are presented to improve this situation by incorporating 'robotic skills' into the concept of interactive autonomy.
Coeckelbergh, Mark; Pop, Cristina; Simut, Ramona; Peca, Andreea; Pintea, Sebastian; David, Daniel; Vanderborght, Bram
2016-02-01
The use of robots in therapy for children with autism spectrum disorder (ASD) raises issues concerning the ethical and social acceptability of this technology and, more generally, about human-robot interaction. However, philosophical papers on the ethics of human-robot interaction usually do not take stakeholders' views into account; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism therapy community. To support responsible research and innovation in this field, this paper identifies a range of ethical, social, and therapeutic concerns, and presents and discusses the results of an exploratory survey that investigated these issues and explored stakeholders' expectations about this kind of therapy. We conclude that although stakeholders generally approve of using robots in therapy for children with ASD, it is wise to avoid replacing therapists with robots and to develop and use robots that have what we call supervised autonomy. This is likely to create more trust among stakeholders and improve the quality of the therapy. Moreover, our research suggests that issues concerning the appearance of the robot need to be adequately dealt with by researchers and therapists. For instance, our survey suggests that zoomorphic robots may be less problematic than robots that look too much like humans.
Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.
Yoo, Bum-Soo; Kim, Jong-Hwan
2015-09-01
During the last few decades, as part of the effort to enhance natural human-robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real time and in the real environment, for a robotic head. We formulate gaze control as a multicriteria decision-making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria, defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to the power set of the criteria to reflect user-defined preferences. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with those of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at the Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
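The paper's specific criteria and fuzzy measures are not given in the abstract, but the global evaluation step can be illustrated with a discrete Choquet integral, a standard choice of fuzzy integral. The criteria names, weights, and interaction bonus below are invented for illustration:

```python
def choquet(scores: dict, mu) -> float:
    """Discrete Choquet integral of per-criterion scores with respect to
    the fuzzy measure mu (a function over frozensets of criteria)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)        # criteria whose score >= current level
    for criterion, value in items:
        total += (value - prev) * mu(frozenset(remaining))
        prev = value
        remaining.remove(criterion)
    return total

# Toy fuzzy measure: additive weights plus a bonus when the visual and
# auditory criteria appear together (values invented for illustration).
W = {"visual": 0.4, "auditory": 0.3, "habituation": 0.3}
def mu(subset):
    bonus = 0.1 if {"visual", "auditory"} <= subset else 0.0
    return min(1.0, sum(W[c] for c in subset) + bonus)

# Global evaluation of one candidate gaze direction; the direction with
# the highest value (after inhibition of return) would win.
print(choquet({"visual": 0.8, "auditory": 0.6, "habituation": 0.2}, mu))
```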
Reversal Learning Task in Children with Autism Spectrum Disorder: A Robot-Based Approach.
Costescu, Cristina A; Vanderborght, Bram; David, Daniel O
2015-11-01
Children with autism spectrum disorder (ASD) engage in highly perseverative and inflexible behaviours. Technological tools, such as robots, have received increased attention as social reinforcers and/or assistive tools for improving the performance of children with ASD. The aim of our study is to investigate the role of the robotic toy Keepon in a cognitive flexibility task performed by children with ASD and typically developing (TD) children. Eighty-one children were included in this study: 40 TD children and 41 children with ASD. Each participant went through two conditions, robot interaction and human interaction, in which they performed the reversal learning task. Our primary outcomes are the number of errors in the acquisition phase and in the reversal phase of the task; as secondary outcomes, we measured attentional engagement and positive affect. The results of this study showed that children with ASD are more engaged in the task, and seem to enjoy it more, when interacting with the robot compared with interacting with the adult. On the other hand, their cognitive flexibility performance is, in general, similar in the robot and human conditions, with the exception of the learning phase, where the robot can interfere with performance. Implications for future research and practice are discussed.
Eizicovits, Danny; Edan, Yael; Tabak, Iris; Levy-Tzedek, Shelly
2018-01-01
Effective human-robot interactions in rehabilitation necessitate an understanding of how these should be tailored to the needs of the human. We report on a robotic system developed as a partner in a 3-D everyday task, using a gamified approach. Our objectives were to: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied (a robotic arm) and a non-embodied (a computer-controlled lighting system) partner. To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. The timing of the participants' movement was primed by the response time of the system: participants moved more slowly when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The slower response time of the robot compared to the computer-controlled system affected only the young group's motivation to continue playing. We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements, and to track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system. We contribute to the growing knowledge concerning personalized human-robot interactions by (1) demonstrating the priming of the human movement by the robotic movement, an important design feature, and (2) identifying response speed as a design variable whose importance depends on the age of the user.
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
...in a synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where human control of a robot agent can be experimentally validated. (Final report covering 17-Sep-2013 to 16-Sep-2014.)
Using Empathy to Improve Human-Robot Relationships
NASA Astrophysics Data System (ADS)
Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana
For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion, expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, being empathic towards one of them and neutral towards the other. The results of a pilot study suggest that users to whom the robot was empathic perceived the robot more as a friend.
Adapting GOMS to Model Human-Robot Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, Jill; Scholtz, Jean; Kieras, David
2007-03-09
Human-robot interaction (HRI) has been maturing in tandem with robots’ commercial success. In the last few years, HRI researchers have been adopting, and sometimes adapting, human-computer interaction (HCI) evaluation techniques to assess the efficiency and intuitiveness of HRI designs. For example, Adams (2005) used Goal Directed Task Analysis to determine the interaction needs of officers from the Nashville Metro Police Bomb Squad. Scholtz et al. (2004) used Endsley’s (1988) Situation Awareness Global Assessment Technique to determine robotic vehicle supervisors’ awareness of when vehicles were in trouble and thus required closer monitoring or intervention. Yanco and Drury (2004) employed usability testing to determine (among other things) how well a search-and-rescue interface supported use by first responders. One set of HCI tools that has so far seen little exploration in the HRI domain, however, is the class of modeling and evaluation techniques known as formal methods.
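As a concrete illustration of the simplest member of the GOMS family, a keystroke-level model (KLM) predicts expert task time by summing per-operator time estimates. The constants below are commonly cited KLM values; the operator sequence is an invented robot-commanding example, not from the paper:

```python
# Commonly cited keystroke-level model (KLM) operator estimates, seconds.
OPERATORS = {
    "K": 0.28,   # keystroke or button press
    "P": 1.10,   # point with a pointing device
    "H": 0.40,   # home hands between devices
    "M": 1.35,   # mental preparation
}

def predicted_time(sequence: str) -> float:
    """Predicted expert execution time for a sequence of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Invented method for "send robot to waypoint" in a point-and-click GUI:
# think, point at the map, click, think, press 'go'.
print(f"{predicted_time('MPKMK'):.2f} s")   # 4.36 s
```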
Motor Contagion during Human-Human and Human-Robot Interaction
Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry
2014-01-01
Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of “mutual understanding” that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners, except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his or her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990
Fuzzy variable impedance control based on stiffness identification for human-robot cooperation
NASA Astrophysics Data System (ADS)
Mao, Dachao; Yang, Wenlong; Du, Zhijiang
2017-06-01
This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the human's intention during co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming to regulate the output fuzzy universe based on the human arm's stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate the advantage of the proposed algorithm.
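The abstract does not specify the fuzzy rule base; the sketch below only shows where a variable damping term enters a one-DOF admittance-style impedance loop, with a simple hand-written rule standing in for the paper's fuzzy inference system. All gains and thresholds are illustrative:

```python
# One-DOF admittance-style impedance loop: M*a + B*v + K*x = F_ext.
# In the paper's scheme, B (the impedance parameter) would come from a
# fuzzy inference system fed by the identified human-arm stiffness; the
# stand-in rule below simply lowers damping as the human pushes harder,
# making the drag feel lighter.
def damping_rule(force: float, b_min: float = 5.0, b_max: float = 40.0) -> float:
    effort = min(abs(force) / 20.0, 1.0)   # normalized interaction effort
    return b_max - (b_max - b_min) * effort

def impedance_step(x: float, v: float, force: float,
                   dt: float = 0.001, M: float = 2.0, K: float = 0.0):
    B = damping_rule(force)
    a = (force - B * v - K * x) / M        # force in, motion out
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(1000):                      # 1 s of a constant 10 N drag
    x, v = impedance_step(x, v, force=10.0)
print(round(x, 3), round(v, 3))
```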
Social Engagement in Public Places: A Tale of One Robot
2014-03-01
...study we examined a prediction of the Computers Are Social Actors (CASA) framework: the more machines present human-like characteristics in a consistent... social cues to increasing levels of social cues during story-telling to human-like game-playing interaction. We found several strong aspects of... support for CASA: the robot that provides even minimal social cues (speech) is more engaging than a robot that does nothing, and the more human-like the...
Social cognitive neuroscience and humanoid robotics.
Chaminade, Thierry; Cheng, Gordon
2009-01-01
We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We will first review important aspects of this framework. In the second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.
Improving Emergency Response and Human-Robotic Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
David I. Gertman; David J. Bruemmer; R. Scott Hartley
2007-08-01
Preparedness for chemical, biological, and radiological/nuclear incidents at nuclear power plants (NPPs) includes the deployment of well-trained emergency response teams. While teams are expected to do well, data from other domains suggest that the timeliness and accuracy associated with incident response can be improved through collaborative human-robot interaction. Many incident response scenarios call for multiple, complex procedure-based activities performed by personnel wearing cumbersome personal protective equipment (PPE) and operating under high levels of stress and workload. While robotic assistance is postulated to reduce workload and exposure, limitations associated with communications and the robot’s ability to act independently have served to limit reliability and reduce our potential to exploit human-robot interaction and the efficacy of response. Recent work at the Idaho National Laboratory (INL) on expanding robot capability has the potential to improve human-system response during disaster management and recovery. Specifically, increasing the range of higher-level robot behaviors such as autonomous navigation and mapping, evolving new abstractions for sensor and control data, and developing metaphors for operator control have the potential to improve the state of the art in incident response. This paper discusses these issues and reports on experiments underway that use intelligence residing on the robot to enhance emergency response.
Investigating the ability to read others' intentions using humanoid robots.
Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio
2015-01-01
The ability to interact with other people hinges crucially on the possibility of anticipating how their actions will unfold. Recent evidence suggests that a similar skill may be grounded in the fact that we perform an action differently when different intentions drive it. Human observers can detect these differences and use them to predict the purpose of the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots indeed can establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.
Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration
Shah, Julie A.
2015-01-01
Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration. PMID:25790568
An intelligent robotic aid system for human services
NASA Technical Reports Server (NTRS)
Kawamura, K.; Bagchi, S.; Iskarous, M.; Pack, R. T.; Saad, A.
1994-01-01
The long term goal of our research at the Intelligent Robotic Laboratory at Vanderbilt University is to develop advanced intelligent robotic aid systems for human services. As a first step toward our goal, the current thrusts of our R&D are centered on the development of an intelligent robotic aid called the ISAC (Intelligent Soft Arm Control). In this paper, we describe the overall system architecture and current activities in intelligent control, adaptive/interactive control and task learning.
Common Metrics for Human-Robot Interaction
NASA Technical Reports Server (NTRS)
Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael
2006-01-01
This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.
Normand, Jean-Marie; Sanchez-Vives, Maria V.; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A.; Slater, Mel
2012-01-01
Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human’s movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale. PMID:23118987
A Face Attention Technique for a Robot Able to Interpret Facial Expressions
NASA Astrophysics Data System (ADS)
Simplício, Carlos; Prado, José; Dias, Jorge
Automatic recognition of facial expressions using vision is an important step towards human-robot interaction. Here, a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) are proposed for incorporation into an autonomous mobile agent whose hardware is composed of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally. To accomplish this, the robot platform performs an arc centered at the human; the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant to the current one at a lower level of the network. Moreover, not only positive but also negative evidence is used to recognize facial expressions.
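The paper's exact symmetry measure is not given; a plausible stand-in is a mirror-difference score over a grayscale face crop, which peaks when the head is viewed frontally and could drive the platform's arc motion:

```python
import numpy as np

def symmetry_score(face: np.ndarray) -> float:
    """Left/right mirror similarity of a grayscale face crop, in [0, 1].
    A frontal view maximizes the score, so the platform can move along
    its arc around the person until the score peaks."""
    h, w = face.shape
    half = w // 2
    left = face[:, :half].astype(float)
    right = np.fliplr(face[:, w - half:]).astype(float)
    return 1.0 - np.abs(left - right).mean() / 255.0

# A mirror-symmetric test image scores ~1.0; an asymmetric (e.g.,
# lit-from-one-side or turned) face scores lower.
sym = np.tile(np.abs(np.linspace(-255, 255, 64)), (64, 1))
print(round(symmetry_score(sym), 3))
```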
Human-Robot Site Survey and Sampling for Space Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Bualat, Maria; Edwards, Laurence; Flueckiger, Lorenzo; Kunz, Clayton; Lee, Susan Y.; Park, Eric; To, Vinh; Utz, Hans; Ackner, Nir
2006-01-01
NASA is planning to send humans and robots back to the Moon before 2020. In order for extended missions to be productive, high-quality maps of lunar terrain and resources are required. Although orbital images can provide much information, many features (local topography, resources, etc.) will have to be characterized directly on the surface. To address this need, we are developing a system to perform site survey and sampling. The system includes multiple robots and humans operating in a variety of team configurations, coordinated via peer-to-peer human-robot interaction. In this paper, we present our system design and describe planned field tests.
Human exploration of Mars - The role of a Mars outpost laboratory
NASA Technical Reports Server (NTRS)
Duke, Michael B.
1992-01-01
Consideration is given to a Martian exploration strategy which includes intensive robotic reconnaissance to characterize features of Mars' geology that are important to the solution of major problems of Mars history, including the possible past presence of life. A human reconnaissance phase may follow the robotic reconnaissance phase, guided to the most productive sites by the results of the robotic missions. The strategy also involves an intensive human phase of investigation, with interactive field geology/laboratory investigation at the Mars outpost. The laboratory investigations, as well as the field work, should be highly interactive with a broad scientific community on earth. The most detailed analyses would be performed on samples returned to earth.
Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.
Borenstein, Jason; Arkin, Ron
2016-02-01
Robots are becoming an increasingly pervasive feature of our personal lives. As a result, growing importance is placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way as to influence human behavior is already a well-established practice, but merely because the practice is commonplace does not mean that the ethical issues associated with its implementation are resolved.
Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.
de Greeff, Joachim; Belpaeme, Tony
2015-01-01
Social learning is a powerful method for cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology, and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a way similar to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e., expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to simply teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better-quality learning input to artificial systems, resulting in improved learning performance.
Influence of facial feedback during a cooperative human-robot task in schizophrenia.
Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane
2017-11-03
Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but it remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties exploiting the facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, which is partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.
Robots for use in autism research.
Scassellati, Brian; Admoni, Henny; Matarić, Maja
2012-01-01
Autism spectrum disorders are a group of lifelong disabilities that affect people's ability to communicate and to understand social cues. Research into applying robots as therapy tools has shown that robots seem to improve engagement and elicit novel social behaviors from people (particularly children and teenagers) with autism. Robot therapy for autism has been explored as one of the first application domains in the field of socially assistive robotics (SAR), which aims to develop robots that assist people with special needs through social interactions. In this review, we discuss the past decade's work in SAR systems designed for autism therapy by analyzing robot design decisions, human-robot interactions, and system evaluations. We conclude by discussing challenges and future trends for this young but rapidly developing research area.
Using mixed-initiative human-robot interaction to bound performance in a search task
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; Douglas A. Few; Devin S. Athey
2008-12-01
Mobile robots are increasingly used in dangerous domains because they can keep humans out of harm's way. Despite their advantages in hazardous environments, their general acceptance in other, less dangerous domains has not been apparent; even in dangerous environments, robots are often viewed as a "last-possible choice." In order to increase the utility and acceptance of robots in hazardous domains, researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support human-robot interaction. In a recent "dirty-bomb" experiment, participants exhibited different search strategies, making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants, independent of prior training and of the individual strategies used to accomplish the task.
The mechanical design of a humanoid robot with flexible skin sensor for use in psychiatric therapy
NASA Astrophysics Data System (ADS)
Burns, Alec; Tadesse, Yonas
2014-03-01
In this paper, a humanoid robot is presented for ultimate use in the rehabilitation of children with mental disorders, such as autism. Creating affordable and efficient humanoids could assist therapy for psychiatric disabilities by offering multimodal communication between the humanoid and humans. Yet humanoid development needs a seamless integration of artificial muscles, sensors, controllers, and structures. We have designed a human-like robot with 15 DOF, a height of 580 mm, and an arm span of 925 mm, using a rapid prototyping system. The robot has a human-like appearance and movement. Flexible sensors around the arms and hands for safe human-robot interaction, and a two-wheel mobile platform for maneuverability, are incorporated into the design. The robot has facial features for illustrating human-friendly behavior. The mechanical design of the robot and the characterization of the flexible sensors are presented. A comprehensive study of the upper-body design, mobile base, actuator selection, electronics, and performance evaluation is included in this paper.
Recent trends in humanoid robotics research: scientific background, applications, and implications.
Solis, Jorge; Takanishi, Atsuo
2010-11-01
Even though the market size is still small at this moment, the application fields of robots are gradually spreading from the manufacturing industry to other sectors, as robots become important components in supporting an aging society. For this reason, research on human-robot interaction (HRI) has been an emerging topic of interest for both basic research and customer applications. Studies are especially focused on the behavioral and cognitive aspects of the interaction and the social contexts surrounding it. As part of these studies, the term "roboethics" has been introduced as an approach to discussing the potentialities and limits of robots in relation to human beings. In this article, we describe recent research trends in the field of humanoid robotics. Their principal applications and their possible impact are discussed.
Calderita, Luis Vicente; Manso, Luis J; Bustos, Pablo; Fernández, Fernando; Bandera, Antonio
2014-01-01
Background Neurorehabilitation therapies exploiting the use-dependent plasticity of our neuromuscular system are devised to help patients who suffer from injuries or diseases of this system. These therapies take advantage of the fact that motor activity alters the properties of our neurons and muscles, including the pattern of their connectivity, and thus their functionality. Hence, a sensor-motor treatment where patients make certain movements will help them (re)learn how to move the affected body parts. But these traditional rehabilitation processes are usually repetitive and lengthy, reducing motivation and adherence to the treatment, and thus limiting the benefits for the patients. Objective Our goal was to create innovative neurorehabilitation therapies based on THERAPIST, a socially assistive robot. THERAPIST is an autonomous robot that is able to find and execute plans and adapt them to new situations in real time. The software architecture of THERAPIST monitors and determines the course of action, learns from previous experiences, and interacts with people using verbal and non-verbal channels. THERAPIST can increase the adherence of the patient to the sessions using serious games. Data are recorded and can be used to tailor patient sessions. Methods We hypothesized that pediatric patients would engage better in a therapeutic non-physical interaction with a robot, facilitating the design of new therapies to improve patient motivation. We propose RoboCog, a novel cognitive architecture. This architecture will enhance the effectiveness and response time of complex multi-degree-of-freedom robots designed to collaborate with humans, combining two core elements: a deep and hybrid representation of the current state, both own and observed; and a set of task-dependent planners, working at different levels of abstraction but connected to this central representation through a common interface. Using RoboCog, THERAPIST engages the human partner in an active interactive process. But RoboCog also endows the robot with abilities for high-level planning, monitoring, and learning. Thus, THERAPIST engages the patient through different games or activities, and adapts the session to each individual. Results RoboCog successfully integrates a deliberative planner with a set of modules working at the situational or sensorimotor levels. This architecture also allows THERAPIST to deliver responses at a human rate. The synchronization of the multiple interaction modalities results from a unique scene representation or model. THERAPIST is now a socially interactive robot that, instead of reproducing the phrases or gestures that its developers decide, maintains a dialogue and autonomously generates gestures or expressions. THERAPIST is able to play simple games with human partners, which requires the humans to perform certain movements, and also to capture the human motion for later analysis by clinical specialists. Conclusions The initial hypothesis was validated by our experimental studies, which showed that interaction with the robot results in highly attentive and collaborative attitudes in pediatric patients. We also verified that RoboCog allows the robot to interact with patients at human rates. However, there remain many issues to overcome. The development of novel hands-off rehabilitation therapies will require the intersection of multiple challenging directions of research that we are currently exploring. PMID:28582242
A multimodal interface for real-time soldier-robot teaming
NASA Astrophysics Data System (ADS)
Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.
2016-05-01
Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for the successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture-recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction
2009-03-01
Conference on Intelligent Robotics in Field, Factory, Service and Space (CIRFFSS 1994), Volume 2
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1994-01-01
The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the following topics: (1) vision systems integration and architecture; (2) selective perception and human-robot interaction; (3) robotic systems technology; (4) military and other field applications; (5) dual-use precommercial robotic technology; (6) building operations; (7) planetary exploration applications; (8) planning; (9) new directions in robotics; and (10) commercialization.
Sartorato, Felippe; Przybylowski, Leon; Sarko, Diana K
2017-07-01
For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.
Middleware Design for Swarm-Driving Robots Accompanying Humans.
Kim, Min Su; Kim, Sang Hyuck; Kang, Soon Ju
2017-02-17
Robots that accompany humans are an ongoing subject of research. The Pet-Bot provides walking-assistance and object-carrying services, without any specific controls, through real-time interaction between the robot and the human. However, Pet-Bot limits the number of robots a user can use; overcoming this limit would allow it to provide services in more areas. Therefore, in this study, we propose a swarm-driving middleware design adopting the concept of a swarm, which provides effective parallel movement that allows multiple human-accompanying robots to accomplish a common purpose. The middleware's functions divide into three parts: a sequence manager for the swarm process, a messaging manager, and a relative-location identification manager. The middleware processes the robots' swarm-process sequence through message exchange over radio frequency (RF) communication using the IEEE 802.15.4 MAC protocol, and manages an infrared (IR) communication module that identifies relative location from IR signal strength. The swarm in this study is composed of a master, which interacts with the user, and slaves, which have no interaction with the user. This composition is intended to control the overall swarm in synchronization with user activity, which is difficult to predict. We evaluate the accuracy of the relative-location estimation using IR communication, the response time of the slaves to a change in user activity, and the time to organize a network according to the number of slaves.
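The record does not specify how IR signal strength maps to relative distance; the following is a minimal sketch under an assumed inverse-power path-loss model, with illustrative constants (the function name, reference values, and exponent are hypothetical, not the authors' calibration).

```python
import math

def ir_distance(strength, s0=1.0, d0=0.1, n=2.0):
    """Estimate relative distance from IR signal strength.

    Hypothetical log-distance model: strength falls off as
    s = s0 * (d0 / d)^n, so d = d0 * (s0 / s)^(1/n).
    s0 is the strength measured at reference distance d0 (metres),
    and n is a path-loss exponent fitted per sensor pair.
    """
    if strength <= 0:
        raise ValueError("signal strength must be positive")
    return d0 * (s0 / strength) ** (1.0 / n)

# Example: a slave reading half the reference strength from the master.
print(round(ir_distance(0.5), 3))  # ~0.141 m under the assumed model
```

In practice the exponent n and reference strength s0 would have to be fitted per emitter-receiver pair, since IR propagation also depends strongly on orientation.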
Gergely, Anna; Petró, Eszter; Topál, József; Miklósi, Ádám
2013-01-01
Robots offer new possibilities for investigating animal social behaviour. This method enhances the controllability and reproducibility of experimental techniques, and it also allows the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs' interactive behaviour in a problem-solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment, but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour, and moved along varied routes. The dogs showed shorter looking and touching durations, but increased gaze alternation, toward the Mechanical Human compared to the Mechanical UMO. This suggests that dogs' interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared to the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and that they recognise some social aspects of UMOs' behaviour. This is the first evidence that the interactive behaviour of a robot is important for evoking dogs' social responsiveness.
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path-planning, the human sometimes does the path planning and, if satisfied, commits those paths to the robot for execution. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path-planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette: a distinct color represents each objective, and tradeoffs among objectives are balanced the way an artist mixes colors to get a desired shade. Human intent is thus analogous to the artist's shade of color. We call the GUI an "Adverb Palette," where "Adverb" denotes a specific type of objective for the path, such as "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The novel interactive interface lets the user evaluate alternatives that trade off different objectives by visualizing the instantaneous outcomes of her actions on the interface. In addition to assisting analysis of the solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby widening the variety of plans. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem. Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
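As a sketch of the palette metaphor, the following scalarizes several path objectives with user-mixed weights, the way the interface mixes colors; the objective names, costs, and weights are invented for illustration and are not the Adverb Palette's actual cost model.

```python
def path_cost(path_metrics, weights):
    """Scalarize multiple path objectives with palette-style weights.

    path_metrics: dict of per-objective costs for one candidate path,
                  e.g. {"quickly": travel_time, "safely": risk}.
    weights:      non-negative mixing weights chosen on the palette;
                  they are normalized so only their ratio matters.
    """
    total_w = sum(weights.values())
    return sum(weights[k] * path_metrics[k] for k in weights) / total_w

# Two candidate paths traded off between speed and safety.
paths = {
    "direct": {"quickly": 10.0, "safely": 8.0},
    "detour": {"quickly": 16.0, "safely": 2.0},
}
mix = {"quickly": 0.3, "safely": 0.7}  # user leans toward "safely"
best = min(paths, key=lambda p: path_cost(paths[p], mix))
print(best)  # "detour" under this mix
```

Under a mix that leans toward "safely", the slower but safer detour wins; shifting weight toward "quickly" flips the choice, which is the tradeoff behavior the palette is meant to expose.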
Tan, Huan; Liang, Chen
2011-01-01
This paper proposes a conceptual hybrid cognitive architecture for cognitive robots to learn behaviors from demonstrations in robotic-aid situations. Unlike current cognitive architectures, this architecture concentrates on the requirements of safety, interaction, and non-centralized processing in robotic-aid situations. Imitation learning technologies for cognitive robots have been integrated into the architecture for rapidly transferring knowledge and skills from human teachers to robots.
An Interactive Astronaut-Robot System with Gesture Control
Liu, Jinguo; Luo, Yifan; Ju, Zhaojie
2016-01-01
Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts on extravehicular activities (EVA) have to communicate with robot assistants through speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system that integrates a data-glove with a space suit, allowing the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) were selected and used to test and validate the performance of the proposed system. PMID:27190503
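A minimal sketch of the PSO-tuned SVM idea, assuming scikit-learn's SVC and a basic global-best PSO over log-scaled C and gamma; the swarm size, inertia and acceleration constants, and search ranges are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_tune_svm(X, y, n_particles=10, iters=20, seed=0):
    """Tune log10(C) and log10(gamma) of an RBF SVM with a basic PSO.

    Each particle is a point in the 2-D log-hyperparameter space,
    scored by cross-validated accuracy on the gesture data.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-2, -4], [3, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def score(p):
        clf = SVC(C=10.0 ** p[0], gamma=10.0 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    fit = np.array([score(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    gbest = pos[fit.argmax()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-2, -4], [3, 1])
        fit = np.array([score(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    return 10.0 ** gbest[0], 10.0 ** gbest[1]
```

Called with a matrix of gesture features X and a label vector y, this returns a (C, gamma) pair for training the final classifier.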
Robot Tracking of Human Subjects in Field Environments
NASA Technical Reports Server (NTRS)
Graham, Jeffrey; Shillcutt, Kimberly
2003-01-01
Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.
Physiological and subjective evaluation of a human-robot object hand-over task.
Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël
2011-11-01
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer that explicitly takes into account the legibility, safety and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing the motions. We therefore selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort. After each robot gesture the participants were asked to rate it on a three-dimensional subjective scale. The subjective data turned out to favor our reference motion. Moreover, the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.
New diagnostic tool for robotic psychology and robotherapy studies.
Libin, Elena; Libin, Alexander
2003-08-01
Robotic psychology and robotherapy, as a new research area, employ a systematic approach to studying the psycho-physiological, psychological, and social aspects of person-robot communication. An analysis of the mechanisms underlying different forms of computer-mediated behavior requires both an adequate methodology and research tools. In this article we discuss the concept, basic principles, structure, and contents of the newly designed Person-Robot Complex Interactive Scale (PRCIS), proposed for the purpose of investigating the psychological specifics and therapeutic potential of multilevel person-robot interactions. Assuming that human-robot communication has symbolic meaning, each interactive pattern evaluated via the newly developed scale is assigned a certain psychological value associated with the person's past life experiences, likes and dislikes, and emotional, cognitive, and behavioral traits or states. PRCIS includes (1) assessment of a person's individual style of communication with the robotic creature based on direct observations; (2) the participant's evaluation of his/her new experiences with an interactive robot and of its features, advantages and disadvantages, as well as past experiences with modern technology; and (3) the instructor's overall evaluation of the session.
Advances in Robotic, Human, and Autonomous Systems for Missions of Space Exploration
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)
2002-01-01
Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human-robot interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role, resulting in a complex interplay of a broad range of factors in the development and planning of new missions. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems, as well as new experiments utilizing human and robotic approaches, will be described. Finally, there will be a discussion of how best to utilize these various approaches to meet space exploration goals.
NASA Technical Reports Server (NTRS)
Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta
2012-01-01
Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationships among the variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to the teleoperator of a robot system and contribute to the closure of currently identified HRP gaps: (1) overlays -- use of overlays in teleoperation to augment the information available on the video feed; (2) camera views -- type and arrangement of camera views for better task performance and awareness of surroundings; and (3) command modalities -- development of gesture and voice command vocabularies.
From Autonomous Robots to Artificial Ecosystems
NASA Astrophysics Data System (ADS)
Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato
During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors have recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and to exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome the inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
NASA Astrophysics Data System (ADS)
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-02-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
Hebbian Plasticity in CPG Controllers Facilitates Self-Synchronization for Human-Robot Handshaking.
Jouaiti, Melanie; Caron, Lancelot; Hénaff, Patrick
2018-01-01
It is well known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes a plausible model of plastic neural controllers that allows synchronized movements to emerge in physical, rhythmic interactions. The controller is designed with central pattern generators (CPG) based on rhythmic Rowat-Selverston neurons endowed with neuronal and synaptic Hebbian plasticity. To demonstrate the interest of the proposed model, the case of handshaking is considered, because it is a very common act both physically and socially, but also a very complex one from the point of view of robotics, neuroscience and psychology. Plastic CPG controllers are implemented in the joints of a simulated robotic arm that has to learn the frequency and amplitude of an external force applied to its effector, thus reproducing the act of handshaking with a human. Results show that the neuronal and synaptic Hebbian plasticity work together, leading to a natural and autonomous synchronization between the arm and the external force even when the frequency changes during the movement. Moreover, a power-consumption analysis shows that, by enabling the emergence of synchronized and coordinated movements, the plasticity mechanisms lead to a significant decrease in the energy spent by the robot actuators, thus generating a more adaptive and natural human/robot handshake.
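The paper's CPGs use Rowat-Selverston neurons with neuronal and synaptic plasticity, which is more than a short sketch can carry. As a simplified stand-in, the adaptive-frequency Hopf oscillator below shows the core effect the abstract describes, a rhythmic unit entraining its own frequency to an external periodic force; all constants are illustrative.

```python
import math

def adaptive_hopf(force, dt=0.001, mu=1.0, gamma=8.0, eps=0.9):
    """Entrain an oscillator's frequency to a periodic input signal.

    A simplified adaptive-frequency Hopf oscillator (a stand-in for
    the paper's plastic Rowat-Selverston CPG): the state (x, y)
    oscillates at frequency omega, and omega itself is adapted by
    the coupling with the external force F(t).
    """
    x, y, omega = 1.0, 0.0, 2.0 * math.pi * 0.5   # start near 0.5 Hz
    for F in force:
        r2 = x * x + y * y
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / math.sqrt(r2)     # frequency adaptation
        x, y, omega = x + dx * dt, y + dy * dt, omega + domega * dt
    return omega / (2.0 * math.pi)                # learned frequency, Hz

# A 1.1 Hz "handshake" force; omega should converge near 1.1 Hz.
T, dt = 60.0, 0.001
force = [math.sin(2 * math.pi * 1.1 * k * dt) for k in range(int(T / dt))]
print(round(adaptive_hopf(force, dt), 2))
```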
Multimodal interaction for human-robot teams
NASA Astrophysics Data System (ADS)
Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle
2013-05-01
Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods based on heads-down teleoperation, which require intensive human attention and degrade the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures to command a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion-sensing hardware, either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables accurate, continuous gesture recognition without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
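A toy sketch of the graceful-degradation idea: arbitrate among partially redundant modes by picking the most confident parse above a threshold. The types, field names, threshold, and confidence numbers are hypothetical, not the fielded system's logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModeInput:
    mode: str            # "speech", "gesture", or "tablet"
    command: str         # parsed command, e.g. "goto waypoint 3"
    confidence: float    # recognizer confidence in [0, 1]

def arbitrate(inputs, threshold=0.6) -> Optional[ModeInput]:
    """Pick the most confident interpretation across redundant modes.

    If every mode falls below the threshold (e.g. speech drowned by
    noise, gesture occluded), the arbiter returns None, prompting
    the operator to re-issue the command on another mode.
    """
    candidates = [i for i in inputs if i.confidence >= threshold]
    return max(candidates, key=lambda i: i.confidence, default=None)

# Noisy environment: speech is unreliable, gesture still gets through.
inputs = [
    ModeInput("speech", "goto waypoint 3", 0.35),
    ModeInput("gesture", "halt", 0.82),
]
choice = arbitrate(inputs)
print(choice.command if choice else "ask operator to repeat")  # "halt"
```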
A Flexible Sensor Technology for the Distributed Measurement of Interaction Pressure
Donati, Marco; Vitiello, Nicola; De Rossi, Stefano Marco Maria; Lenzi, Tommaso; Crea, Simona; Persichetti, Alessandro; Giovacchini, Francesco; Koopman, Bram; Podobnik, Janez; Munih, Marko; Carrozza, Maria Chiara
2013-01-01
We present a sensor technology for measuring physical human-robot interaction pressure, developed over the last years at Scuola Superiore Sant'Anna. The system is composed of flexible matrices of opto-electronic sensors covered by a soft silicone cover. This sensory system is completely modular and scalable, allowing one to cover areas of any size and shape and to measure different pressure ranges. In this work we present the main application areas for this technology. A first generation of the system was used to monitor human-robot interaction in upper-limb (NEUROExos; Scuola Superiore Sant'Anna) and lower-limb (LOPES; University of Twente) exoskeletons for rehabilitation. A second generation, with increased resolution and a wireless connection, was used to develop a pressure-sensitive foot insole and an improved human-robot interaction measurement system. The experimental characterization of the latter system, along with its validation on three healthy subjects, is presented here for the first time. A perspective on future uses and development of the technology is finally drafted. PMID:23322104
Achieving Collaborative Interaction with a Humanoid Robot
2003-01-01
Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction
de Greeff, Joachim; Belpaeme, Tony
2015-01-01
Social learning is a powerful method for the cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems, and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e., expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring their tutoring to the robot's performance rather than simply teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot's bids while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better-quality learning input to artificial systems, resulting in improved learning performance. PMID:26422143
A Framework to Describe, Analyze and Generate Interactive Motor Behaviors
Jarrassé, Nathanaël; Charalambous, Themistoklis; Burdet, Etienne
2012-01-01
While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks. PMID:23226231
How to make an autonomous robot as a partner with humans: design approach versus emergent approach.
Fujita, M
2007-01-15
In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.
Self-Reconfiguration Planning of Robot Embodiment for Inherent Safe Performance
NASA Astrophysics Data System (ADS)
Uchida, Masafumi; Nozawa, Akio; Asano, Hirotoshi; Onogaki, Hitoshi; Mizuno, Tota; Park, Young-Il; Ide, Hideto; Yokoyama, Shuichi
When a robot and a human work together collaboratively, they share one working environment and each interferes with the other. In other words, it is impossible to avoid physical contact and the interaction of forces between a robot and a human. The boundary of each one's complex, dynamic occupied area changes during the connection movements that make up collaborative work. The main constraint governing the robustness of such connection movements is each party's physical characteristics, that is, their embodiment. A robot's body is variable, whereas the embodiment of a human is essentially fixed. Therefore, safe and robust connection movements result when a robot has a body well suited to the embodiment of its human partner. The purpose of this research is to realize collaborative work between a self-reconfiguring robot and a human. To achieve this purpose, a self-reconfiguration algorithm based on indexes that evaluate a robot body from a macroscopic point of view was examined on a modular robot system with a 2-D lattice structure. In this paper, we specifically investigated the effect of limiting each module's learning to cooperative behavior with adjacent modules, as measured against the macroscopic evaluation indexes.
Multisensor-based human detection and tracking for mobile service robots.
Bellotto, Nicola; Hu, Huosheng
2009-02-01
One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data-fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
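A minimal sketch of the sequential fusion step, assuming the filterpy library, a constant-velocity state model, and invented noise levels; the abstract does not give the authors' actual models or parameters.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # update period in seconds

def fx(x, dt):
    """Constant-velocity motion model: state is [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Both sensors observe the person's planar position."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=-1.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([0.0, 0.0, 0.0, 0.0])

# Sequential fusion within one cycle: legs from the laser, then the
# face from the camera, each with its own measurement noise.
R_legs = np.diag([0.05, 0.05])   # leg detection from the LRF is precise
R_face = np.diag([0.20, 0.20])   # camera face position is coarser

ukf.predict()
ukf.update(np.array([1.0, 0.5]), R=R_legs)   # leg-pattern detection
ukf.update(np.array([1.1, 0.4]), R=R_face)   # face detection
print(ukf.x[:2])  # fused position estimate
```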
Robotic Billiards: Understanding Humans in Order to Counter Them.
Nierhoff, Thomas; Leibrandt, Konrad; Lorenz, Tamara; Hirche, Sandra
2016-08-01
Ongoing technological advances in the areas of computation, sensing, and mechatronics enable robot-based systems to interact with humans in the real world. To succeed against a human in a competitive scenario, a robot must anticipate the human's behavior and include it in its own planning framework. It can then predict the next human move and counter it accordingly, not only achieving better overall performance but also systematically exploiting the opponent's weak spots. Pool is used as a representative scenario to derive a model-based planning and control framework in which not only the physics of the environment but also a model of the opponent is considered. By representing the game of pool as a Markov decision process and incorporating a model of human decision-making based on empirical studies, an optimized policy is derived. This enables the robot to include the opponent's typical game style in its tactical considerations when planning a stroke. The results are validated in simulations and in real-life experiments with an anthropomorphic robot playing pool against a human.
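A toy sketch of the planning core: value iteration over an MDP whose transition probabilities already fold in the modeled opponent's response to each stroke. The states, actions, probabilities, and rewards below are invented for illustration.

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    """Solve a small MDP where the opponent model is folded into P.

    P[s][a] is a list of (next_state, prob) pairs describing table
    physics *and* the modeled human response to our stroke; R[s][a]
    is the expected immediate reward of action a in state s.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a])
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy layout: a safe stroke usually keeps control of the table; a
# risky stroke scores more but may hand the table to the opponent.
states = ["easy_layout", "opponent_turn"]
actions = {"easy_layout": ["safe", "risky"], "opponent_turn": ["wait"]}
P = {
    "easy_layout": {
        "safe":  [("easy_layout", 0.8), ("opponent_turn", 0.2)],
        "risky": [("easy_layout", 0.5), ("opponent_turn", 0.5)],
    },
    "opponent_turn": {"wait": [("easy_layout", 1.0)]},
}
R = {
    "easy_layout": {"safe": 1.0, "risky": 2.0},
    "opponent_turn": {"wait": -1.0},
}
print(value_iteration(states, actions, P, R))
```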
Could robots become authentic companions in nursing care?
Metzler, Theodore A; Lewis, Lundy M; Pope, Linda C
2016-01-01
Creating android and humanoid robots to furnish companionship in the nursing care of older people continues to attract substantial development capital and research. Some people object, though, that machines of this kind furnish human-robot interaction characterized by inauthentic relationships. In particular, robotic and artificial intelligence (AI) technologies have been charged with substituting mindless mimicry of human behaviour for the real presence of conscious caring offered by human nurses. When thus viewed as deceptive, the robots also have prompted corresponding concerns regarding their potential psychological, moral, and spiritual implications for people who will be interacting socially with these machines. The foregoing objections and concerns can be assessed quite differently, depending upon ambient religious beliefs or metaphysical presuppositions. The complaints may be set aside as unnecessary, for example, within religious traditions for which even current robots can be viewed as presenting spiritual aspects. Elsewhere, technological cultures may reject the complaints as expression of outdated superstition, holding that the machines eventually will enjoy a consciousness described entirely in materialist and behaviourist terms. While recognizing such assessments, the authors of this essay propose that the heart of the foregoing objections and concerns may be evaluated, in part, scientifically - albeit with a conclusion recommending fundamental revisions in AI modelling of human mental life. Specifically, considerations now favour introduction of AI models using interactive classical and quantum computation. Without this change, the answer to the essay's title question arguably is 'no' - with it, the answer plausibly becomes 'maybe'. Either outcome holds very interesting implications for nurses. © 2015 John Wiley & Sons Ltd.
Mixed-Initiative Human-Robot Interaction: Definition, Taxonomy, and Survey
2015-01-01
Acceptance and Attitudes Toward a Human-like Socially Assistive Robot by Older Adults.
Louie, Wing-Yue Geoffrey; McColl, Derek; Nejat, Goldie
2014-01-01
Recent studies have shown that cognitive and social interventions are crucial to the overall health of older adults, including their psychological, cognitive, and physical well-being. However, due to the rapidly growing elderly population of the world, the resources and people to provide these interventions are lacking. Our work focuses on the use of social robotic technologies to provide person-centered cognitive interventions. In this article, we investigate the acceptance and attitudes of older adults toward the human-like expressive socially assistive robot Brian 2.1, in order to determine if the robot's human-like assistive and social characteristics would promote its use as a cognitive and social interaction tool to aid with activities of daily living. The results of a robot acceptance questionnaire administered during a robot demonstration session with a group of 46 elderly adults showed that the majority of the individuals had positive attitudes toward the socially assistive robot and its intended applications.
Interactive language learning by robots: the transition from babbling to word forms.
Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe
2012-01-01
The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
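A minimal sketch of one frequency-dependent mechanism of the kind described: counting syllable forms across dialogue turns and promoting those whose relative frequency crosses a threshold. The syllabic segmentation is assumed given, and the threshold is illustrative.

```python
from collections import Counter

def salient_forms(turns, threshold=0.05):
    """Frequency-dependent word-form learning, minimally sketched.

    Each turn is the robot's percept of the human's speech as a list
    of syllable strings. Forms whose relative frequency crosses the
    threshold become salient candidates for production, mimicking how
    consistently pronounced content words come to dominate babble.
    """
    counts = Counter(syl for turn in turns for syl in turn)
    total = sum(counts.values())
    return [form for form, c in counts.most_common()
            if c / total >= threshold]

# Content words ("ball", "red") recur; function words vary by turn.
dialogue = [
    ["look", "ball", "red", "ball"],
    ["the", "ball", "is", "red"],
    ["see", "red", "ball", "there"],
]
print(salient_forms(dialogue, threshold=0.15))  # ['ball', 'red']
```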
Human-Robot Planetary Exploration Teams
NASA Technical Reports Server (NTRS)
Tyree, Kimberly
2004-01-01
The EVA Robotic Assistant (ERA) project at NASA Johnson Space Center studies human-robot interaction and robotic assistance for future human planetary exploration. Over the past four years, the ERA project has been performing field tests with one or more four-wheeled robotic platforms and one or more space-suited humans. These tests have provided experience in how robots can assist humans, how robots and humans can communicate in remote environments, and what combination of humans and robots works best for different scenarios. The most efficient way to understand what tasks human explorers will actually perform, and how robots can best assist them, is to have human explorers and scientists go and explore in an outdoor, planetary-relevant environment, with robots to demonstrate what they are capable of, and roboticists to observe the results. It can be difficult to have a human expert itemize all the needed tasks required for exploration while sitting in a lab: humans do not always remember all the details, and experts in one arena may not even recognize that the lower level tasks they take for granted may be essential for a roboticist to know about. Field tests thus create conditions that more accurately reveal missing components and invalid assumptions, as well as allow tests and comparisons of new approaches and demonstrations of working systems. We have performed field tests in our local rock yard, in several locations in the Arizona desert, and in the Utah desert. We have tested multiple exploration scenarios, such as geological traverses, cable or solar panel deployments, and science instrument deployments. The configuration of our robot can be changed, based on what equipment is needed for a given scenario, and the sensor mast can even be placed on one of two robot bases, each with different motion capabilities. The software architecture of our robot is also designed to be as modular as possible, to allow for hardware and configuration changes. Two focus areas of our research are safety and crew time efficiency. For safety, our work involves enabling humans to reliably communicate with a robot while moving in the same workspace, and enabling robots to monitor and advise humans of potential problems. Voice, gesture, remote computer control, and enhanced robot intelligence are methods we are studying. For crew time efficiency, we are investigating the effects of assigning different roles to humans and robots in collaborative exploration scenarios.
Hwang, Jihong; Park, Taezoon; Hwang, Wonil
2013-05-01
The affective interaction between humans and robots can be influenced by various aspects of a robot: its appearance, countenance, gesture, voice, etc. Among these, the overall shape of the robot can play a key role in invoking desired emotions in users and bestowing preferred personalities on robots. In this regard, the present study experimentally investigates the effects of overall robot shape on the emotions invoked in users and the perceived personalities of the robot, with the objective of deriving guidelines for the affective design of service robots. To this end, 27 different robot shapes were selected, modeled and fabricated, consisting of combinations of three different shapes of head, trunk and limb (legs and arms): rectangular-parallelepiped, cylindrical and human-like. For the experiment, visual images and real prototypes of these robot shapes were presented to participants, and the emotions invoked and personalities perceived from the presented robots were measured. The results showed that the overall shape of a robot arouses any of three emotions, named 'concerned', 'enjoyable' and 'favorable', among which the 'concerned' emotion is negatively correlated with the 'big five personality factors' while the 'enjoyable' and 'favorable' emotions are positively correlated. It was found that the 'big five personality factors' and the 'enjoyable' and 'favorable' emotions are more strongly perceived through the real prototypes than through the visual images. It was also found that the robot shape consisting of a cylindrical head, human-like trunk and cylindrical limb is best for the 'conscientious' personality and the 'favorable' emotion; the shape consisting of a cylindrical head, human-like trunk and human-like limb for the 'extroverted' personality; the shape consisting of a cylindrical head, cylindrical trunk and cylindrical limb for the 'anti-neurotic' personality; and the shape consisting of a rectangular-parallelepiped head, human-like trunk and human-like limb for the 'enjoyable' emotion. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Research and development of service robot platform based on artificial psychology
NASA Astrophysics Data System (ADS)
Zhang, Xueyuan; Wang, Zhiliang; Wang, Fenhua; Nagai, Masatake
2007-12-01
Some related work on the control architecture of robot systems is briefly summarized. Following these discussions, this paper proposes a control architecture for service robots based on artificial psychology. In this architecture, the robot obtains cognition of its environment through sensors; this input is handled by an intelligent model, an affective model and a learning model, and the robot finally expresses its reaction to outside stimulation through its behavior. For a better understanding of the architecture, its hierarchical structure is also discussed. The control system of the robot can be divided into five layers: a physical layer, a drives layer, an information-processing and behavior-programming layer, an application layer, and a system inspection and control layer. This paper shows how to achieve system integration across hardware modules, software interfaces and fault diagnosis. The embedded system GENE-8310 is selected as the PC platform of the robot APROS-I, and its primary storage medium is a CF card. The arms and body of the robot are made up of 13 motors and connecting fittings. In addition, the robot has a head with emotional facial expressions, and the head has 13 DOFs. The emotional and intelligent model is one of the most important parts of human-machine interaction. In order to better simulate human emotion, an emotional interaction model for the robot is proposed according to Maslow's theory of need levels and Simonov's theory of mood information. This architecture has already been used in our intelligent service robot.
Coordinating with Humans by Adjustable-Autonomy for Multirobot Pursuit (CHAMP)
NASA Astrophysics Data System (ADS)
Dumond, Danielle; Ayers, Jeanine; Schurr, Nathan; Carlin, Alan; Burke, Dustin; Rousseau, Jeffrey
2012-06-01
One of the primary challenges facing the modern small-unit tactical team is safely and effectively searching, exploring, clearing and holding urbanized terrain that includes buildings, streets, and subterranean dwellings. Buildings provide cover and concealment to an enemy and restrict the movement of forces while diminishing their ability to engage the adversary. The use of robots has significant potential to reduce the risk to tactical teams and to act as a dramatic force multiplier for the small unit. Despite advances in robotic mobility, sensing capabilities, and human-robot interaction, the use of robots in room-clearing operations remains nascent. CHAMP is a software system in development that integrates with a team of robotic platforms to enable them to coordinate with a human operator performing a search and pursuit task. The human operator can either hand control to the robots to search autonomously, or retain control and direct the robots where needed. CHAMP's autonomy is built upon a combination of adversarial pursuit algorithms and dynamic function allocation strategies that maximize the team's resources. Multi-modal interaction with CHAMP is achieved using novel gesture-recognition capabilities that reduce the need for heads-down tele-operation. The CHAMP coordination algorithm addresses dynamic and limited team sizes, generates a novel map of the area, and takes into account mission goals, user preferences and team roles. In this paper we report results from preliminary simulated experiments and find that the CHAMP system performs faster than traditional search and pursuit algorithms.
The Role of Reciprocity in Verbally Persuasive Robots.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2016-08-01
The current research examines the persuasive effects of reciprocity in the context of human-robot interaction. This is an important theoretical and practical extension of persuasive robotics, testing (1) whether robots can utilize verbal requests and (2) whether robots can utilize persuasive mechanisms (e.g., reciprocity) to gain human compliance. Participants played a trivia game with a robot teammate. The ostensibly autonomous robot helped (or failed to help) the participants by providing correct (vs. incorrect) trivia answers. Then, the robot directly asked participants to complete a 15-minute pattern-recognition task. Compared to no help, a robot's prior helping behavior significantly increased the likelihood of compliance (60 percent vs. 33 percent). Interestingly, participants' evaluations of the robot (i.e., competence, warmth, and trustworthiness) did not predict compliance. The results also provide an insightful comparison showing that participants complied at similar rates with the robot as with computer agents. These findings document a clear and empirically powerful potential role for verbal messages in persuasive robotics.
Learning models of Human-Robot Interaction from small data
Zehfroosh, Ashkan; Kokkoni, Elena; Tanner, Herbert G.; Heinz, Jeffrey
2018-01-01
This paper offers a new approach to learning discrete models for human-robot interaction (HRI) from small data. In the motivating application, HRI is an integral part of a pediatric rehabilitation paradigm that involves a play-based, social environment aiming at improving mobility for infants with mobility impairments. Designing interfaces in this setting is challenging, because in order to harness, and eventually automate, the social interaction between children and robots, a behavioral model capturing the causality between robot actions and child reactions is needed. The paper adopts a Markov decision process (MDP) as such a model, and selects the transition probabilities through an empirical approximation procedure called smoothing. Smoothing has been successfully applied in natural language processing (NLP) and identification where, similarly to the current paradigm, learning from small data sets is crucial. The goal of this paper is two-fold: (i) to describe our HRI application, and (ii) to provide evidence that supports the application of smoothing for small data sets. PMID:29492408
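The abstract does not say which smoothing estimator the authors chose; additive (Laplace) smoothing, the simplest of the NLP-style estimators alluded to, can be sketched as follows (state and action names are placeholders):

```python
from collections import defaultdict

def smoothed_transitions(observations, states, actions, alpha=1.0):
    """Estimate MDP transition probabilities from sparse interaction logs.

    observations: list of (state, action, next_state) triples.
    alpha: additive smoothing constant; alpha=1 is add-one (Laplace) smoothing.
    Returns P[(s, a)][s'], with every probability strictly positive.
    """
    counts = defaultdict(int)
    for s, a, s_next in observations:
        counts[(s, a, s_next)] += 1
    P = {}
    for s in states:
        for a in actions:
            total = sum(counts[(s, a, t)] for t in states)
            P[(s, a)] = {t: (counts[(s, a, t)] + alpha) / (total + alpha * len(states))
                         for t in states}
    return P
```

With alpha > 0, a child reaction never observed in the small data set still keeps a small nonzero probability instead of being ruled out entirely.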
Using robots to help people habituate to visible disabilities.
Riek, Laurel D; Robinson, Peter
2011-01-01
We explore a new way of using robots as human-human social facilitators: inter-ability communication. This refers to communication between people with disabilities and those without disabilities. We have interviewed people with head and facial movement disorders (n = 4), and, using a vision-based approach, recreated their movements on our 27 degree-of-freedom android robot. We then conducted an exploratory experiment (n = 26) to see if the robot might serve as a suitable tool to allow people to practice inter-ability interaction on a robot before doing it with a person. Our results suggest a robot may be useful in this manner. Furthermore, we have found a significant relationship between people who hold negative attitudes toward robots and negative attitudes toward people with disabilities.
Ethica ex machina: issues in roboethics.
Mushiaki, Shigeru
2013-12-01
Is "roboethics" the "ethics of humans" or the "ethics of robots"? According to the Roboethics Roadmap (Gianmarco Veruggio), it is the human ethics of robot designers, manufacturers, and users. And ifroboethics roots deeply in society, artificial ethics (ethics of robots) might be put on the agenda some day. At the 1st International Symposium on Roboethics in San Remo, Ronald C. Arkin gave the presentation "Bombs, Bonding, and Bondage: Human-Robot Interaction and Related Ethical Issues" (2004). "Bondage" is the issue of enslavement and possible rebellion of robots. "Bombs" is the issue of military use of robots. And "bonding" is the issue of affective, emotional attachment of humans to robots. I contrast two extreme attitudes towards the issue of "bonding" and propose a middle ground. "Anthropomorphism" has two meanings. First, it means "human-shaped-ness." Second, it means "attribution of human characteristics or feelings to a nonhuman being (god, animal, or object)" (personification, empathy). Some say that Japanese (or East Asians) hold "animism," which makes it easy for them to treat robots like animated beings (to anthropomorphize robots); hence "Robot Kingdom Japan." Cosima Wagner criticizes such exaggeration and oversimplification as "invented tradition". I reinforce her argument with neuroscientific findings and argue that such "animism" is neither Shintoistic nor Buddhistic, but a universal tendency. Roboticists, especially Japanese roboticists emphasize that robotics is "anthropology." It is true that through the construction of humanoid robots we can better understand human beings (so-called "constructive approach"). But at the same time, we must not forget that robotic technology, like any other technology, changes our way of living and being--deeply: it can bring about our ontological transformation. In this sense, the governance of robotic technology is "governed governance." The interdisciplinary research area of technology assessment studies (TAS) will gain much importance. And we should always be ready to rethink the direction of the research and development of robotic technology, bearing the desirable future of human society in mind.
Molecular Robots Obeying Asimov's Three Laws of Robotics.
Kaminka, Gal A; Spokoini-Stern, Rachel; Amir, Yaniv; Agmon, Noa; Bachelet, Ido
2017-01-01
Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even to emulate the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale when we understand robots on this scale on the basis of their interactions.
Sahaï, Aïsha; Pacherie, Elisabeth; Grynszpan, Ouriel; Berberian, Bruno
2017-01-01
Nowadays, interactions with others involve not only human peers but also automated systems. Many studies suggest that the motor predictive systems engaged during action execution are also involved during joint actions with peers and during the observation of actions generated by other humans. Indeed, the comparator-model hypothesis holds that the comparison between a predicted state and an estimated real state enables motor control and, by a similar mechanism, the understanding and anticipation of observed actions. Such a mechanism allows predictions to be made about an ongoing action and is essential to action regulation, especially during joint actions with peers. Interestingly, the same comparison process has been shown to be involved in the construction of an individual's sense of agency, both for self-generated actions and for observed actions generated by other humans. However, there is no consensus on the involvement of such predictive mechanisms during interactions with machines, probably due to the high heterogeneity of the automata used in experiments, ranging from very simplistic devices to full humanoid robots. The discrepancies observed during human/machine interactions could arise from the absence of action/observation matching abilities when interacting with traditional low-level automata, and the difficulty of building joint agency with this kind of machine could stem from the same problem. In this context, we review the studies investigating predictive mechanisms during social interactions with humans and with automated artificial systems. We start by presenting human data that show the involvement of predictions in action control and in the sense of agency during social interactions. Thereafter, we confront this literature with data from the robotics field. Finally, we address the upcoming issues in the field of robotics related to automated systems intended to act as collaborative agents. PMID:29081744
Wang, Yin
2015-01-01
Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418
Multi-tasking arbitration and behaviour design for human-interactive robots
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei
2013-05-01
Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, collision avoidance and conversation with humans. This article presents a design framework for the control and recognition processes that meets these requirements while taking into account stochastic human behaviour. The proposed design method first introduces a Petri net for the synchronisation of multiple tasks. The Petri net formulation is then converted to Markov decision processes and treated in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can instead be designed systematically, in a state estimation and optimisation framework, from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and by experiments using RI-MAN, which was developed for interactive tasks with humans.
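A minimal sketch of the Petri-net machinery used for task synchronisation follows; the places and the conversation/safety example are illustrative inventions, not the article's actual net:

```python
def enabled(marking, transition):
    """A transition may fire only if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition['pre'].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens (returns a new marking)."""
    m = dict(marking)
    for p, n in transition['pre'].items():
        m[p] = m.get(p, 0) - n
    for p, n in transition['post'].items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative synchronisation: conversation may start only once safety
# confirmation has placed a token in 'safety_ok'.
t_start_talk = {'pre':  {'safety_ok': 1, 'human_present': 1},
                'post': {'talking': 1, 'safety_ok': 1}}
m0 = {'safety_ok': 1, 'human_present': 1}
if enabled(m0, t_start_talk):
    m1 = fire(m0, t_start_talk)   # conversation task is now active
```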
Natural Speech Toward Humans and Intelligent Agents During a Simulated Search and Rescue Mission
2008-12-01
Only fragments of this record survive: research has been done on giving directions to robots and on the point of view that teammates normally attribute to them (Imai, Hiraki, Miyasato, & Nakatsu); related work includes Eklundh (2006) and bystander intervention as a resource in human-robot collaboration (Interaction Studies, 7(3), 455-477).
Mapping of unknown industrial plant using ROS-based navigation mobile robot
NASA Astrophysics Data System (ADS)
Priyandoko, G.; Ming, T. Y.; Achmad, M. S. H.
2017-10-01
This research examines how humans work with a teleoperated unmanned mobile robot to inspect an industrial plant area, producing 2D/3D maps for further critical evaluation. The experiment focuses on two parts: how the human and robot conduct remote interaction using a robust method, and how the robot perceives the surrounding environment as a 2D/3D perspective map. ROS (Robot Operating System) was utilized in the development and implementation, providing a robust data communication method in the form of messages and topics. RGBD SLAM performs the visual mapping function to construct the 2D/3D map using a Kinect sensor. The results showed that the teleoperated mobile robot system successfully extends human perception for remote surveillance in a large industrial plant area. It was concluded that the proposed work is a robust solution for mapping large, unknown building interiors.
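The paper's own node and topic names are not given; a minimal ROS1-style teleoperation publisher, of the kind such a system would use to stream operator commands to the base as messages on a topic, might look like this (node name invented, /cmd_vel is the conventional topic):

```python
#!/usr/bin/env python
# Minimal ROS1 teleoperation sketch: stream operator velocity commands to the
# mobile base on /cmd_vel. Mapping (e.g., RGBD SLAM with the Kinect) would run
# in separate nodes subscribing to the sensor topics.
import rospy
from geometry_msgs.msg import Twist

def stream_velocity(linear_x, angular_z):
    rospy.init_node('plant_teleop')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)          # 10 Hz command stream
    cmd = Twist()
    cmd.linear.x = linear_x        # m/s forward
    cmd.angular.z = angular_z      # rad/s yaw
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    stream_velocity(0.2, 0.0)      # creep forward while mapping
```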
A survey on dielectric elastomer actuators for soft robots.
Gu, Guo-Ying; Zhu, Jian; Zhu, Li-Min; Zhu, Xiangyang
2017-01-23
Conventional industrial robots with rigid actuation technology have made great progress for humans in the fields of automated assembly and manufacturing. With an increasing number of robots needing to interact with humans and unstructured environments, there is a need for soft robots capable of sustaining large deformation while inducing little pressure or damage when maneuvering through confined spaces. The emergence of soft robotics offers the prospect of applying soft actuators as artificial muscles in robots, replacing traditional rigid actuators. Dielectric elastomer actuators (DEAs) are recognized as one of the most promising soft actuation technologies because: i) dielectric elastomers are a kind of soft, motion-generating material that resembles natural human muscle in terms of force, strain (displacement per unit length or area) and actuation pressure/density; and ii) dielectric elastomers can produce large voltage-induced deformation. In this survey, we first introduce DEAs, emphasizing the key points of their working principle, key components and electromechanical modeling approaches. Then, different DEA-driven soft robots, including wearable/humanoid robots, walking/serpentine robots, flying robots and swimming robots, are reviewed. Lastly, we summarize the challenges and opportunities for further study in terms of mechanism design, dynamics modeling and autonomous control.
Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems
2015-05-19
Only fragments of this record survive: an algorithm based on Age-Fitness Pareto Optimization (AFPO) with an additional user-preference objective and a neural network-based user model; traveled distances greater than 40, about five times farther than any robot traveled in our experiments; and a client-server computational architecture in which the client is an interactive program that takes a pair of controllers as input and simulates two copies of the robot.
Learning and adaptation: neural and behavioural mechanisms behind behaviour change
NASA Astrophysics Data System (ADS)
Lowe, Robert; Sandamirskaya, Yulia
2018-01-01
This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.
Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin
2013-09-28
Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable not only of offering more frequent and more accessible therapies but also of providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen, and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment, comprising a training phase followed by the actual performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that participants required shorter execution times when helped by a real object than by a virtual object alone. The 'reaching away' movements took longer to execute than the 'returning towards' movements, irrespective of the influence of gravity on the direction of movement. The GENTLE/A system was able to adapt so that the duration required to execute a point-to-point movement matched the leading or lagging performance of the user with respect to the robot. This adaptability could be useful in clinical settings when stroke subjects interact with the system, and could also serve as an assessment parameter across interaction sessions. As the system adapts to user input, and as the task becomes easier through practice, the robot would auto-tune for more demanding and challenging interactions. The improvement in performance of the participants in an embedded environment compared to a virtual environment also shows promise for clinical applicability, to be tested in due time. Studying the physiology of the upper arm to understand the muscle groups involved, and their influence on the various movements executed during this study, forms a key part of our future work.
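The abstract does not publish the lead-lag model's equations; one plausible reading, offered only as a sketch with invented gain and bounds, is a proportional update of the next movement's duration from the user's lead or lag on the previous one:

```python
def adapt_duration(prev_duration, user_progress, ref_progress,
                   gain=0.5, d_min=1.0, d_max=10.0):
    """Lead-lag adaptation sketch: a leading user (positive error) gets a
    shorter next movement; a lagging user gets a longer one.

    prev_duration: seconds allotted to the last point-to-point movement.
    user_progress, ref_progress: fractions in [0, 1] of trajectory completed.
    """
    lead_lag = user_progress - ref_progress        # >0 leading, <0 lagging
    new_duration = prev_duration * (1.0 - gain * lead_lag)
    return max(d_min, min(d_max, new_duration))    # keep within safe bounds
```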
Cognitive patterns: giving autonomy some context
NASA Astrophysics Data System (ADS)
Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike
2013-05-01
Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).
Rare Neural Correlations Implement Robotic Conditioning with Delayed Rewards and Disturbances
Soltoggio, Andrea; Lemme, Andre; Reinhart, Felix; Steil, Jochen J.
2013-01-01
Neural conditioning associates cues and actions with subsequent rewards. The environments in which robots operate, however, are pervaded by a variety of disturbing stimuli and uncertain timing. In particular, variable reward delays make it difficult to reconstruct which previous actions are responsible for subsequent rewards. Such uncertainty is handled by biological neural networks, but represents a challenge for computational models, suggesting the lack of a satisfactory theory for robotic neural conditioning. The present study demonstrates the use of rare neural correlations in making correct associations between rewards and previous cues or actions. Rare correlations are functional in selecting sparse synapses to be eligible for later weight updates if a reward occurs. The repetition of this process singles out the associating and reward-triggering pathways, and thereby copes with distal rewards. The neural network displays macro-level classical and operant conditioning, which is demonstrated in a real-life interactive human-robot scenario. The proposed mechanism models realistic conditioning in humans and animals and implements similar behaviors in neuro-robotic platforms. PMID:23565092
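The full learning rule is in the paper, not the abstract; the gist, sketched here with invented parameter values, is a reward-modulated Hebbian update in which only rare, strong pre/post correlations tag synapses with a decaying eligibility trace that a later (possibly delayed) reward converts into weight changes:

```python
import numpy as np

def rare_correlation_update(w, pre, post, elig, reward,
                            theta=0.9, decay=0.98, lr=0.05):
    """One step of reward-modulated Hebbian learning with rare correlations.

    w:    weight matrix, shape (n_post, n_pre)
    pre, post: activity vectors of pre- and postsynaptic neurons
    elig: eligibility traces, same shape as w
    """
    corr = np.outer(post, pre)                       # Hebbian coincidences
    # Only rare, strong coincidences mark synapses as eligible for updates.
    elig = decay * elig + np.where(corr > theta, corr, 0.0)
    # A sparse, possibly delayed reward converts eligibility into learning.
    w = w + lr * reward * elig
    return w, elig
```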
Physical Student-Robot Interaction with the ETHZ Haptic Paddle
ERIC Educational Resources Information Center
Gassert, R.; Metzger, J.; Leuenberger, K.; Popp, W. L.; Tucker, M. R.; Vigaru, B.; Zimmermann, R.; Lambercy, O.
2013-01-01
Haptic paddles--low-cost one-degree-of-freedom force feedback devices--have been used with great success at several universities throughout the US to teach the basic concepts of dynamic systems and physical human-robot interaction (pHRI) to students. The ETHZ haptic paddle was developed for a new pHRI course offered in the undergraduate…
NASA Astrophysics Data System (ADS)
Ososky, Scott; Schuster, David; Jentsch, Florian; Fiore, Stephen; Shumaker, Randall; Lebiere, Christian; Kurup, Unmesh; Oh, Jean; Stentz, Anthony
2012-06-01
Current ground robots are largely employed via tele-operation and provide their operators with useful tools to extend reach, improve sensing, and avoid dangers. Moving from robots that are useful as tools to truly synergistic human-robot teaming, however, will require not only greater technical capabilities among robots, but also a better understanding of the ways in which the principles of teamwork can be applied from exclusively human teams to mixed teams of humans and robots. In this respect, a core characteristic that enables successful human teams to coordinate shared tasks is their ability to create, maintain, and act on a shared understanding of the world and of the roles of the team and its members in it. The team performance literature clearly points towards two important cornerstones for the shared understanding of team members: mental models and situation awareness. These constructs have also been investigated as products of teams; amongst teams, they are shared mental models and shared situation awareness. Consequently, we are studying how these two constructs can be measured and instantiated in human-robot teams. In this paper, we report results from three related efforts investigating process and performance outcomes for human-robot teams. Our investigations include: (a) how human mental models of tasks and teams change depending on whether a teammate is a human, a service animal, or an advanced automated system; (b) how computer modeling can lead to mental models being instantiated and used in robots; and (c) how we can simulate interactions between human and future robotic teammates on the basis of changes in shared mental models and situation assessment.
Gácsi, Márta; Szakadát, Sára; Miklósi, Adám
2013-01-01
These studies are part of a project aiming to reveal relevant aspects of human-dog interactions that could serve as a model for designing successful human-robot interactions. Presently, there are no successfully commercialized assistance robots; however, assistance dogs work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog-owner dyads performing a carrying task. We revealed typical behavior sequences as well as differences depending on the dyads' experience and on whether the owner was a wheelchair user. In Study 2, we investigated dogs' responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to executing the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners' disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate to high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs' cooperative behaviors and problem-solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.
Learning compliant manipulation through kinesthetic and tactile human-robot interaction.
Kronander, Klas; Billard, Aude
2014-01-01
Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily life. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task: typically, tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in RLfD. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
Mergner, Thomas; Lippi, Vittorio
2018-01-01
Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with “reactive” balancing of external disturbances and “proactive” balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and Center of mass (COM) height, in relation to reference ranges that remain to be established. The references may include human likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements, but may not be aware of the additionally required postural control, which then needs to be implemented into the robot. PMID:29867428
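The article leaves the exact normalization and reference ranges open; read literally, the benchmarking score might be computed as evoked sway scaled by weight and COM height, for instance (a hypothetical reading, not the authors' published formula):

```python
def normalized_sway(peak_sway, mass_kg, com_height_m, g=9.81):
    """Hypothetical benchmark score: evoked sway magnitude normalized to
    robot weight (m*g) and COM height, so that differently sized bipeds
    become comparable. Reference ranges remain to be established."""
    return peak_sway / (mass_kg * g * com_height_m)
```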
Roberts, Luke; Park, Hae Won; Howard, Ayanna M
2012-01-01
Rehabilitation robots in home environments have the potential to dramatically improve quality of life for individuals who experience disabling circumstances due to injury or chronic health conditions. Unfortunately, although classes of robotic systems for rehabilitation exist, these devices are typically not designed for children. Since over 150 million children in the world live with a disability, this poses a unique challenge for deploying such robotics for this target demographic. To overcome this barrier, we discuss a system that uses a wireless arm-glove input device to enable interaction with a robotic playmate during various play scenarios. Results from testing the system with 20 human subjects show that the system has potential, but certain aspects need to be improved before deployment with children.
Interaction Challenges in Human-Robot Space Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Nourbakhsh, Illah
2005-01-01
In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.
NASA Technical Reports Server (NTRS)
Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.
1994-01-01
Expanding man's presence in space requires capable, dexterous robots that can be controlled from Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications, and these delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control (OBTLC) architecture. OBTLC removes the burden of teleoperation from the human operator and enables the execution of tasks not possible with current techniques. It is a hierarchical approach to control in which the human operator specifies high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots, and its capability has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks while operating with humans. This pattern requires robot operators to have extensive technical training on a variety of teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion-sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed within the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion-sensing control.
Perception and Perspective in Robotics
2003-01-01
Only fragments of this record survive: segmented views quantized to two luminance levels and matched against prototypes; an active, developing, malleable perceptual system in the robotic domain; and learning in which the human interacting with the robot acts as an instructor demonstrating the task (see Goldberg, D. and Mataric, M., 1999, for one effort in the robotic domain).
Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations
NASA Technical Reports Server (NTRS)
Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana
2011-01-01
We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q; MacDonald, Bruce A; Wegner, Daniel M
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not fully attain, human likeness. An underlying mechanism may be that appearance affects users' perceptions of the robot's personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot's mind, personality and eeriness. A repeated-measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in randomized order: the robot had either a humanlike face, a silver face, or no face on its display screen. Each time, the robot assisted the participant in taking his/her blood pressure. Participants rated the robot's mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having the most mind, and as being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, and moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display as less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot's face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot's personality. Designers should be aware that the face on a robot's display screen can affect both the perceived mind and the perceived personality of the robot.
A new approach of active compliance control via fuzzy logic control for multifingered robot hand
NASA Astrophysics Data System (ADS)
Jamil, M. F. A.; Jalani, J.; Ahmad, A.
2016-07-01
Safety is a vital issue in Human-Robot Interaction (HRI). In order to guarantee safety in HRI, model-reference impedance control can be a very useful approach for introducing compliant control. In particular, this paper establishes a fuzzy logic compliance control (i.e., active compliance control) to reduce impact and forces during physical interaction between humans/objects and robots. Exploiting a virtual mass-spring-damper system allows us to determine a desired compliance level by understanding the behavior of the model-reference impedance control. The performance of the fuzzy logic compliance control is tested in simulation for a robotic hand known as the RED Hand. The results show that fuzzy logic is a feasible control approach, particularly for controlling position and providing compliant control. In addition, fuzzy logic control allows us to simplify the controller design process (i.e., avoid complex computation) when dealing with nonlinearities and uncertainties.
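As a concrete illustration of the model-reference idea (gains and time step invented for the example, not the paper's values), one Euler step of the virtual mass-spring-damper reference model looks like this:

```python
def compliant_step(x, dx, x_des, f_ext, M=1.0, B=20.0, K=100.0, dt=0.001):
    """One Euler step of the reference model M*ddx + B*dx + K*(x - x_des) = f_ext.

    Under an external force f_ext the commanded position yields rather than
    resists, which is the compliant behavior a fuzzy controller can be tuned
    to reproduce. M, B, K set the virtual mass, damping and stiffness.
    """
    ddx = (f_ext - B * dx - K * (x - x_des)) / M
    dx = dx + ddx * dt
    x = x + dx * dt
    return x, dx
```

Lower K and B give a softer, more yielding response; higher values make the hand stiffer and more position-accurate.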
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
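The prototype's script set is not described beyond "a simple finite state machine scripting layer"; a toy sketch of such a layer, with states and events invented here, conveys the idea:

```python
# Toy finite state machine scripting layer for a thrown reconnaissance robot.
# States and events are illustrative, not the prototype's actual scripts.
TRANSITIONS = {
    ('idle',       'thrown'):       'tumbling',
    ('tumbling',   'at_rest'):      'self_right',
    ('self_right', 'upright'):      'surveil',
    ('surveil',    'operator_cmd'): 'teleop',
    ('teleop',     'link_lost'):    'surveil',   # fall back to autonomy
}

def step(state, event):
    """Advance the FSM; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = 'idle'
for event in ['thrown', 'at_rest', 'upright', 'operator_cmd']:
    state = step(state, event)   # ends in 'teleop'
```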
Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo
2015-01-01
The Uncanny valley hypothesis, which holds that almost-human characteristics in a robot or a device can cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet the phenomenon is still not well understood. Many have investigated the external design of humanoid robot faces and bodies, but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our feelings of uneasiness and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might influence our perception of robots, be they related to the subjects, such as culture or attitude toward robots, or related to the robot, such as the emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and to state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967
Teaching Human Poses Interactively to a Social Robot
Gonzalez-Pacheco, Victor; Malfaz, Maria; Fernandez, Fernando; Salichs, Miguel A.
2013-01-01
The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue- Depth) -based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics. PMID:24048336
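The abstract leaves the pose learner itself unspecified; a minimal nearest-neighbor sketch of the teach/recognize loop (feature encoding and threshold invented here) conveys the idea of pairing RGB-D pose examples with spoken labels:

```python
import numpy as np

class PoseTeacher:
    """Stores labeled pose examples and classifies new ones (nearest neighbor)."""

    def __init__(self):
        self.examples = []   # list of (feature_vector, label)

    def teach(self, skeleton, label):
        """Pair an RGB-D skeleton feature vector with the spoken label (ASR)."""
        self.examples.append((np.asarray(skeleton, dtype=float), label))

    def recognize(self, skeleton, max_dist=0.5):
        """Return the label of the closest taught pose, or None if too far."""
        if not self.examples:
            return None
        s = np.asarray(skeleton, dtype=float)
        feats, labels = zip(*self.examples)
        dists = [np.linalg.norm(s - f) for f in feats]
        i = int(np.argmin(dists))
        return labels[i] if dists[i] <= max_dist else None
```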
Human-robot interaction: kinematics and muscle activity inside a powered compliant knee exoskeleton.
Knaepen, Kristel; Beyl, Pieter; Duerinck, Saartje; Hagman, Friso; Lefeber, Dirk; Meeusen, Romain
2014-11-01
To date, it is not entirely clear how humans interact with automated gait rehabilitation devices and how we can, based on that interaction, maximize the effectiveness of these exoskeletons. The goal of this study was to gain knowledge of the human-robot interaction, in terms of kinematics and muscle activity, between a healthy human motor system and a powered knee exoskeleton (KNEXO). Therefore, temporal and spatial gait parameters, human joint kinematics, exoskeleton kinetics and muscle activity during four different walking trials were studied in 10 healthy male subjects. Healthy subjects can walk with KNEXO in patient-in-charge mode with some slight constraints in kinematics and muscle activity, primarily due to the inertia of the device. Yet during robot-in-charge walking the muscular constraints are reversed by adding positive power to the leg swing, compensating in part for this inertia. Next to that, KNEXO accurately records and replays the right knee kinematics, meaning that subject-specific trajectories can be implemented as target trajectories during assisted walking. No significant differences in the human response to the interaction with KNEXO in low- and high-compliant assistance could be pointed out. This contradicts our hypothesis that muscle activity would decrease with increasing assistance. It seems that the differences between the parameter settings of low- and high-compliant control might not be sufficient to observe clear effects in healthy subjects. Moreover, we should take into account that KNEXO is a unilateral, 1-degree-of-freedom device.
Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily
2011-08-08
Recent research testing interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper-class approach that can elicit nearly identical responses regardless of the robot used. The result can allow researchers across the globe to perform similar experiments using shared code, so that modular 'switching out' of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot within the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
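H3DAPI itself is C++; purely to illustrate the wrapper-class pattern described, here is a language-neutral sketch in Python with invented class and method names:

```python
from abc import ABC, abstractmethod

class HapticDevice(ABC):
    """Uniform interface: experiment code talks only to this class, so one
    robot can be swapped for another without recoding the environment."""

    @abstractmethod
    def get_position(self):
        """Return the end-effector position (x, y, z) in meters."""

    @abstractmethod
    def set_force(self, fx, fy, fz):
        """Command a force in newtons at the end effector."""

class WamDevice(HapticDevice):
    """Hypothetical binding to one concrete robot's driver."""
    def get_position(self):
        return (0.0, 0.0, 0.0)   # would query the robot driver here
    def set_force(self, fx, fy, fz):
        pass                     # would forward the command to the driver here

def render_spring(device, anchor=(0.0, 0.0, 0.0), k=200.0):
    """Robot-agnostic haptic effect: runs unchanged on any HapticDevice."""
    x, y, z = device.get_position()
    device.set_force(k * (anchor[0] - x), k * (anchor[1] - y), k * (anchor[2] - z))
```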
Can a Humanoid Face be Expressive? A Psychophysiological Investigation
Lazzeri, Nicole; Mazzei, Daniele; Greco, Alberto; Rotesi, Annalisa; Lanatà, Antonio; De Rossi, Danilo Emilio
2015-01-01
Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs of innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot were compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether there could be differences induced by interpreting robot versus human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models. PMID:26075199
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions of human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expressions of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like the left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased responses to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural responses to the perception of human actions. PMID:20657777
Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction
Cruz Zurian, Heber; Atefi, Seyed Reza; Seoane Martinez, Fernando; Lukowicz, Paul
2017-01-01
In this paper, we developed a fully textile sensing fabric for tactile touch sensing as a robot skin to detect human-robot interactions. The sensor covers a 20-by-20 cm² area with 400 sensitive points and samples at 50 Hz per point. We defined seven gestures inspired by the social and emotional interactions of typical people-to-people or people-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination recognizes the gestures with 93.3% accuracy for a known group of participants and 89.1% for strangers. PMID:29120389
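To make the described pipeline concrete, here is a minimal sketch, with synthetic stand-in data, of the frame-descriptor / temporal-feature / classifier chain; the particular descriptors and the random-forest classifier are illustrative choices under stated assumptions, not the authors' exact algorithm.

```python
# Sketch: reduce each 20x20 pressure frame to a few descriptors, summarize
# them over time with simple statistics, then train a gesture classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_descriptors(frames):
    """frames: (T, 20, 20) pressure maps -> (T, 3) per-frame descriptors."""
    total = frames.sum(axis=(1, 2))                        # overall pressure
    peak = frames.max(axis=(1, 2))                         # contact intensity
    area = (frames > 0.1 * frames.max()).sum(axis=(1, 2))  # contact area
    return np.stack([total, peak, area], axis=1)

def temporal_features(desc):
    """Summarize each descriptor channel over time with basic statistics."""
    return np.concatenate([desc.mean(0), desc.std(0), desc.max(0) - desc.min(0)])

# Synthetic stand-in data: 60 recordings, 7 gesture classes, 50 frames each.
rng = np.random.default_rng(0)
X = np.array([temporal_features(frame_descriptors(rng.random((50, 20, 20))))
              for _ in range(60)])
y = rng.integers(0, 7, size=60)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```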
A Novel Concept for Safe, Stiffness-Controllable Robot Links.
Stilli, Agostino; Wurdemann, Helge A; Althoefer, Kaspar
2017-03-01
The recent decade has seen an astounding increase of interest and advancement in a new field of robotics aimed at creating structures specifically for safe interaction with humans. Softness, flexibility, and variable stiffness in robotics have been recognized as highly desirable characteristics for many applications. A number of solutions have been proposed, ranging from entirely soft robots (such as those composed mainly of soft materials like silicone), via flexible continuum and snake-like robots, to rigid-link robots enhanced by joints that exhibit elastic behavior, implemented either in hardware or purely by means of intelligent control. Although these are very good solutions paving the path to safe human-robot interaction, we propose here a new approach that focuses on creating stiffness controllability for the linkages between the robot joints. This article proposes a replacement for the traditionally rigid robot link: the new link is equipped with the additional capability of stiffness controllability. With this added feature, a robot can accurately carry out manipulation tasks (high stiffness), but can virtually instantaneously reduce its stiffness when a human is nearby or in contact with the robot. The key point of the invention described here is a robot link made of an airtight chamber formed by a soft and flexible, but high-strain-resistant, combination of a plastic mesh and a silicone wall. Inflated with air to a high pressure, the mesh-silicone chamber behaves like a rigid link; reducing the air pressure softens the link, rendering the robot structure safe. This article investigates a number of link prototypes and shows the feasibility of the new concept. Stiffness tests have been performed, showing that a significant level of stiffness can be achieved: up to 40 N of reaction force along the axial direction, for a 25-mm-diameter sample at 60 kPa, at an axial deformation of 5 mm. The results confirm that this novel concept for robot manipulator linkages exhibits the beam-like behavior of traditional rigid links when fully pressurized and significantly reduced stiffness at low pressure. The proposed concept has the potential to easily create safe robots, augmenting traditional robot designs.
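As a small worked example using the figures reported above, the axial stiffness of the pressurized sample follows directly from reaction force over deformation:

```python
# Worked example using the numbers reported in the abstract above.
force_N = 40.0        # reaction force along the axial direction at 60 kPa
deflection_mm = 5.0   # axial deformation at which that force was measured
print(f"axial stiffness: {force_N / deflection_mm:.1f} N/mm")  # 8.0 N/mm
```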
Broz, Frank; Nehaniv, Chrystopher L; Belpaeme, Tony; Bisio, Ambra; Dautenhahn, Kerstin; Fadiga, Luciano; Ferrauto, Tomassino; Fischer, Kerstin; Förster, Frank; Gigliotta, Onofrio; Griffiths, Sascha; Lehmann, Hagen; Lohan, Katrin S; Lyon, Caroline; Marocco, Davide; Massera, Gianluca; Metta, Giorgio; Mohan, Vishwanathan; Morse, Anthony; Nolfi, Stefano; Nori, Francesco; Peniak, Martin; Pitsch, Karola; Rohlfing, Katharina J; Sagerer, Gerhard; Sato, Yo; Saunders, Joe; Schillingmann, Lars; Sciutti, Alessandra; Tikhanoff, Vadim; Wrede, Britta; Zeschel, Arne; Cangelosi, Angelo
2014-07-01
This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots. Copyright © 2014 Cognitive Science Society, Inc.
Human Assisted Robotic Vehicle Studies - A conceptual end-to-end mission architecture
NASA Astrophysics Data System (ADS)
Lehner, B. A. E.; Mazzotta, D. G.; Teeney, L.; Spina, F.; Filosa, A.; Pou, A. Canals; Schlechten, J.; Campbell, S.; Soriano, P. López
2017-11-01
With current space exploration roadmaps indicating the Moon as a proving ground on the way to human exploration of Mars, it is clear that human-robotic partnerships will play a key role in successful future human space missions. This paper details a conceptual end-to-end architecture for an exploration mission in cis-lunar space with a focus on human-robot interactions, called Human Assisted Robotic Vehicle Studies (HARVeSt). HARVeSt will build on knowledge of plant growth in space gained from experiments on board the ISS and test the first growth of plants on the Moon. A planned deep space habitat will be utilised as the base of operations for the human-robotic elements of the mission. The mission will serve as a technology demonstrator not only for autonomous tele-operations in cis-lunar space but also for key enabling technologies for future human surface missions. This mission will also build on the ISS's successful model of international cooperation. Mission assets such as a modular rover will allow the mission to be extended and the area to be scouted and prepared for the start of an international Moon Village.
Proactive learning for artificial cognitive systems
NASA Astrophysics Data System (ADS)
Lee, Soo-Young
2010-04-01
Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). For situation awareness in an unknown environment, the system must receive audiovisual signals and accumulate knowledge. If its knowledge is insufficient, the PL system should improve itself through the internet and other sources. For human-oriented decision making, the robot also requires self-identity and emotion. Finally, the developed models and system will be mounted on a robot intended for a society in which humans and robots co-exist. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans with several levels of cognitive ability.
A physical model of sensorimotor interactions during locomotion
NASA Astrophysics Data System (ADS)
Klein, Theresa J.; Lewis, M. Anthony
2012-08-01
In this paper, we describe the development of a bipedal robot that models the neuromuscular architecture of human walking. The body is based on principles derived from human muscular architecture, using muscles on straps to mimic agonist/antagonist muscle action as well as bifunctional muscles. Load sensors in the straps model Golgi tendon organs. The neural architecture is a central pattern generator (CPG) composed of a half-center oscillator combined with phase-modulated reflexes that is simulated using a spiking neural network. We show that the interaction between the reflex system, body dynamics and CPG results in a walking cycle that is entrained to the dynamics of the system. We also show that the CPG helped stabilize the gait against perturbations relative to a purely reflexive system, and compared the joint trajectories to human walking data. This robot represents a complete physical, or ‘neurorobotic’, model of the system, demonstrating the usefulness of this type of robotics research for investigating the neurophysiological processes underlying walking in humans and animals.
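The half-center oscillator at the core of the CPG can be illustrated compactly. The sketch below is a rate-based, Matsuoka-style oscillator, an assumption standing in for the authors' spiking implementation: two mutually inhibiting neurons with slow self-adaptation whose rectified outputs, for suitable parameters, alternate and provide the basic flexor/extensor rhythm. All parameter values here are illustrative.

```python
# Sketch of a two-neuron half-center oscillator (Matsuoka-style dynamics).
import numpy as np

def matsuoka_step(x, v, dt=0.005, tau=0.05, tau_a=0.6, beta=2.5, w=2.0, u=1.0):
    """One Euler step: x are membrane states, v are adaptation states."""
    y = np.maximum(x, 0.0)                         # rectified firing rates
    dx = (-x - w * y[::-1] - beta * v + u) / tau   # each neuron inhibits the other
    dv = (y - v) / tau_a                           # slow self-adaptation
    return x + dt * dx, v + dt * dv

x, v = np.array([0.1, 0.0]), np.zeros(2)           # slight asymmetry starts the rhythm
trace = []
for _ in range(4000):                              # simulate 20 s
    x, v = matsuoka_step(x, v)
    trace.append(np.maximum(x, 0.0))
trace = np.array(trace)   # the two columns burst in alternation (flexor/extensor)
print(trace[-3:])
```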
A robotic voice simulator and the interactive training for hearing-impaired people.
Sawada, Hideyuki; Kitani, Mitsuki; Hayashi, Yasumori
2008-01-01
A talking and singing robot which adaptively learns vocalization skills by means of an auditory feedback learning algorithm is being developed. The robot consists of motor-controlled vocal organs such as vocal cords, a vocal tract and a nasal cavity that generate a natural voice imitating human vocalization. In this study, the robot is applied to a speech articulation training system for the hearing-impaired, because the robot is able to reproduce their vocalization and show them how it can be improved to generate clear speech. The paper briefly introduces the mechanical construction of the robot and how it autonomously acquires vocalization skills through auditory feedback learning by listening to human speech. The training system is then described, together with an evaluation of the speech training with hearing-impaired people.
NASA Astrophysics Data System (ADS)
Tay, T. T.; Low, Raymond; Loke, H. J.; Chua, Y. L.; Goh, Y. H.
2018-04-01
The proliferation of robotic technologies in recent years brings robots closer to humanity. Much research is ongoing, at various stages of development, to bring robots into our homes, schools, nurseries, elderly care centres, offices, hospitals and factories. With recently developed robots tending to have appearances increasingly similar to household animals and humans, there is a need to study the uncanny valley phenomenon. Generally, people's acceptance of robots increases as the robots acquire increasing similarity to human features, until a stage where people feel very uncomfortable, eerie, fearful and disgusted when the robot's appearance becomes almost, but not quite, human. This phenomenon, called the uncanny valley, was first reported by Masahiro Mori. Numerous studies have measured the uncanny valley in Japan and in European countries; however, little research on the phenomenon has been reported in Malaysia so far. In view of the different cultural background and exposure to robotics technology of the Malaysian population compared with European or East Asian populations, this phenomenon is worth studying in the Malaysian context. The main aim of this work is to conduct a preliminary study to determine the existence of the uncanny valley phenomenon in Malaysian urban and rural populations, and whether there are differences in acceptance between the two populations, which differ, among other things, in their rate of urbanization and exposure to the latest technologies. A set of four interactive robotic faces and an ideal human model representing a fifth robot are used in this study. The robots have features resembling a cute animal, a cartoon character, a typical robot and a human. Questionnaire surveys were conducted on respondents from urban and rural populations. The survey data were analysed to determine the preferred features in a humanoid robot, the respondents' acceptance of the robotic faces, and the existence of the uncanny valley phenomenon. Based on this limited study, the uncanny valley phenomenon was found to exist in both the Malaysian urban and rural populations.
Design of a simulation environment for laboratory management by robot organizations
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Cellier, Francois E.; Rozenblit, Jerzy W.
1988-01-01
This paper describes the basic concepts needed for a simulation environment capable of supporting the design of robot organizations for managing chemical, or similar, laboratories on the planned U.S. Space Station. The environment should facilitate a thorough study of the problems to be encountered in assigning the responsibility of managing a non-life-critical, but mission valuable, process to an organized group of robots. In the first phase of the work, we seek to employ the simulation environment to develop robot cognitive systems and strategies for effective multi-robot management of chemical experiments. Later phases will explore human-robot interaction and development of robot autonomy.
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
Lai, Ying-Chih; Deng, Jianan; Liu, Ruiyuan; Hsiao, Yung-Chi; Zhang, Steven L; Peng, Wenbo; Wu, Hsing-Mei; Wang, Xingfu; Wang, Zhong Lin
2018-06-04
Robots that can move, feel, and respond like organisms will have a revolutionary impact on today's technologies. Soft robots with organism-like adaptive bodies have shown great potential in a vast range of robot-human and robot-environment applications. Developing skin-like sensory devices allows them to naturally sense and interact with their environment, and ideally the capability to feel should be active, like real skin. However, complicated structures, incompatible moduli, poor stretchability and sensitivity, large driving voltages, and power dissipation hinder the applicability of conventional technologies. Here, various actively perceivable and responsive soft robots are enabled by self-powered active triboelectric robotic skins (tribo-skins) that simultaneously possess excellent stretchability and excellent sensitivity in the low-pressure regime. The tribo-skins can actively sense proximity, contact, and pressure from external stimuli via self-generated electricity. The driving energy comes from a natural triboelectrification effect involving the cooperation of contact electrification and electrostatic induction. The integration of the tribo-skins with soft actuators enables soft robots to perform various active sensing and interactive tasks, including perceiving their own muscle motions, working states, textile dampness, and even subtle human physiological signals. Moreover, the self-generated signals can drive optoelectronic devices for visual communication and be processed for diverse sophisticated uses. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on EOG that can help disabled people improve their ability to live independently. The system acquires the EOG signal from the user, detects the number of blinks and the direction of gaze, and then sends commands to the wheelchair robot via RS-232 to control it. The EOG-based wheelchair robot control system combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movements to control the wheelchair robot.
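A minimal sketch of the control chain described above, blink counting on a thresholded EOG trace mapped to a serial command, might look as follows; the threshold, command strings, and port details are illustrative assumptions, not the paper's protocol.

```python
# Sketch: count blinks in a synthetic EOG trace, then map to a command.
import numpy as np
# import serial  # with real hardware: port = serial.Serial("/dev/ttyS0", 9600)

def count_blinks(eog, thresh=200e-6, fs=250):
    """Count threshold crossings separated by at least 150 ms."""
    above = eog > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1])
    blinks = []
    for s in onsets:
        if not blinks or s - blinks[-1] > int(0.15 * fs):
            blinks.append(int(s))
    return len(blinks)

COMMANDS = {1: b"FORWARD\n", 2: b"STOP\n"}   # hypothetical command protocol

fs = 250
rng = np.random.default_rng(1)
eog = 50e-6 * rng.standard_normal(2 * fs)    # 2 s of baseline noise
eog[100:115] += 400e-6                       # two synthetic blinks
eog[350:365] += 400e-6

n = count_blinks(eog, fs=fs)
cmd = COMMANDS.get(n, b"STOP\n")
print(n, cmd)                                # port.write(cmd) on hardware
```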
Working and Learning with Knowledge in the Lobes of a Humanoid's Mind
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Savely, Robert; Bluethmann, William; Kortenkamp, David
2003-01-01
Humanoid class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that will be needed for humanoids to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint project of NASA and the Defense Advanced Research Projects Agency (DARPA), and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task level sequences, and sensorimotor association).
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may face difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, evaluations of the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches in real scenarios are lacking: most tests have been conducted in controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. The performance of this system is evaluated and compared against unimodal approaches, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
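The core of the fusion step can be illustrated with the standard Gaussian case: if audio and vision each provide an independent, noisy estimate of the speaker's bearing, the Bayesian posterior mean is the precision-weighted average. This is a simplifying assumption used for illustration, not the paper's full inference system.

```python
# Sketch: Bayesian fusion of two independent Gaussian bearing estimates.
def fuse_gaussian(mu_a, var_a, mu_v, var_v):
    """Fuse two independent Gaussian estimates of the same quantity."""
    w_a, w_v = 1.0 / var_a, 1.0 / var_v           # precisions
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)  # precision-weighted mean
    return mu, 1.0 / (w_a + w_v)                  # fused mean and variance

# Illustrative numbers: audio is coarse (std 10 deg), vision sharp (std 2 deg).
mu, var = fuse_gaussian(mu_a=30.0, var_a=100.0, mu_v=24.0, var_v=4.0)
print(f"fused bearing: {mu:.1f} deg, std: {var ** 0.5:.1f} deg")
```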
Ong, Carmichael F; Hicks, Jennifer L; Delp, Scott L
2016-05-01
Technologies that augment human performance are the focus of intensive research and development, driven by advances in wearable robotic systems. Success has been limited by the challenge of understanding human-robot interaction. To address this challenge, we developed an optimization framework to synthesize a realistic human standing long jump and used the framework to explore how simulated wearable robotic devices might enhance jump performance. A planar, five-segment, seven-degree-of-freedom model with physiological torque actuators, which have variable torque capacity depending on joint position and velocity, was used to represent human musculoskeletal dynamics. An active augmentation device was modeled as a torque actuator that could apply a single pulse of up to 100 Nm of extension torque. A passive design was modeled as rotational springs about each lower limb joint. Dynamic optimization searched for physiological and device actuation patterns to maximize jump distance. Optimization of the nominal case yielded a 2.27 m jump that captured salient kinematic and kinetic features of human jumps. When the active device was added to the ankle, knee, or hip, jump distance increased to between 2.49 and 2.52 m. Active augmentation of all three joints increased the jump distance to 3.10 m. The passive design increased jump distance to 3.32 m by adding torques of 135, 365, and 297 Nm to the ankle, knee, and hip, respectively. Dynamic optimization can be used to simulate a standing long jump and investigate human-robot interaction. Simulation can aid in the design of performance-enhancing technologies.
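At a much reduced scale, the "search for actuation patterns that maximize jump distance" idea can be sketched with a point-mass jumper and a single decision variable; this toy optimization (takeoff angle for an assumed takeoff speed and centre-of-mass height) only illustrates the structure of the problem, not the authors' seven-degree-of-freedom framework.

```python
# Toy dynamic optimization: choose the takeoff angle that maximizes the
# flight distance of a point-mass jumper launched from height h0.
import numpy as np
from scipy.optimize import minimize_scalar

g, v0, h0 = 9.81, 3.5, 0.5   # gravity, takeoff speed (m/s), CoM height (m)

def jump_distance(theta):
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    t_flight = (vy + np.sqrt(vy**2 + 2 * g * h0)) / g   # time until landing
    return vx * t_flight

res = minimize_scalar(lambda th: -jump_distance(th), bounds=(0.1, 1.4),
                      method="bounded")
print(f"best angle: {np.degrees(res.x):.1f} deg, "
      f"distance: {jump_distance(res.x):.2f} m")
```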
Virtual Presence: One Step Beyond Reality
NASA Technical Reports Server (NTRS)
Budden, Nancy Ann
1997-01-01
Our primary objective was to team up a group of scientists and engineers from two different NASA cultures and simulate an interactive teleoperated robot conducting geologic field work on the Moon or Mars. The information derived from the experiment will benefit both the robotics team and the planetary exploration team in the areas of robot design and development, and mission planning and analysis. The Earth Sciences and Space and Life Sciences Division combines the past with the future, contributing experience from Apollo crews exploring the lunar surface, knowledge of reduced-gravity environments, the performance limits of EVA suits, and future goals for human exploration beyond low Earth orbit. The Automation, Robotics, and Simulation Division brings to the table the technical expertise of robotic systems and the future goals of highly interactive robotic capabilities, treading on the edge of technology by joining, for the first time, a unique combination of telepresence and virtual reality.
A Pneumatic Tactile Sensor for Co-Operative Robots
He, Rui; Yu, Jianjun; Zuo, Guoyu
2017-01-01
Tactile sensors with comprehensive functions are urgently needed if advanced robots are to co-exist and co-operate with human beings. Pneumatic tactile sensors based on an air bladder possess some noticeable advantages for human-robot interaction applications. In this paper, we construct a pneumatic tactile sensor and apply it to the fingertip of a robot hand to sense force, vibration and slippage via changes in the pressure of the air bladder, and we use the sensor to perceive object features such as softness and roughness. The pneumatic tactile sensor has good linearity, repeatability and low hysteresis; both its size and sensing range can be customized by using different materials and different thicknesses for the air bladder. It is also simple and cheap to fabricate. The pneumatic tactile sensor is therefore suitable for co-operative robots and can be widely used to improve the performance of service robots. Applied to the fingertip, it endows the robotic hand with the ability to co-operate with humans and handle fragile objects, owing to the inherent compliance of the air bladder. PMID:29125565
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q.; MacDonald, Bruce A.; Wegner, Daniel M.
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not fully attain, human likeness. An underlying mechanism may be that appearance affects users' perceptions of the robot's personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot's mind, personality and eeriness. A repeated measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, a silver face, or no face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot's mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having the most mind, and as being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, and moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot's face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot's personality. Designers should be aware that the face on a robot's display screen can affect both the perceived mind and personality of the robot. PMID:24015263
Child-Robot Interactions for Second Language Tutoring to Preschool Children
Vogt, Paul; de Haas, Mirjam; de Jong, Chiara; Baxter, Peta; Krahmer, Emiel
2017-01-01
In this digital age social robots will increasingly be used for educational purposes, such as second language tutoring. In this perspective article, we propose a number of design features to develop a child-friendly social robot that can effectively support children in second language learning, and we discuss some technical challenges for developing these. The features we propose include choices to develop the robot such that it can act as a peer to motivate the child during second language learning and build trust at the same time, while still being more knowledgeable than the child and scaffolding that knowledge in adult-like manner. We also believe that the first impressions children have about robots are crucial for them to build trust and common ground, which would support child-robot interactions in the long term. We therefore propose a strategy to introduce the robot in a safe way to toddlers. Other features relate to the ability to adapt to individual children’s language proficiency, respond contingently, both temporally and semantically, establish joint attention, use meaningful gestures, provide effective feedback and monitor children’s learning progress. Technical challenges we observe include automatic speech recognition (ASR) for children, reliable object recognition to facilitate semantic contingency and establishing joint attention, and developing human-like gestures with a robot that does not have the same morphology humans have. We briefly discuss an experiment in which we investigate how children respond to different forms of feedback the robot can give. PMID:28303094
Communication and knowledge sharing in human-robot interaction and learning from demonstration.
Koenig, Nathan; Takayama, Leila; Matarić, Maja
2010-01-01
Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user's visual observation of the robot and the robot's auditory cues affect the user's ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements. Copyright © 2010. Published by Elsevier Ltd.
Development of a skin for intuitive interaction with an assistive robot.
Markham, Heather C; Brewer, Bambi R
2009-01-01
Assistive robots for persons with physical limitations need to interact with humans in a manner that is safe for the user and the environment. Early work in this field centered on task-specific robots. Recent work has focused on the use of the MANUS ARM and the development of different interfaces. The most intuitive interaction with an object is through touch. By creating a skin for the robot arm that directly controls its movement compliance, we have developed a novel and intuitive method of interaction. This paper describes the development of a skin which acts as a switch. When activated through touch, the skin puts the arm into compliant mode, allowing it to be moved to the desired location safely; when released, it puts the robot into non-compliant mode, thereby keeping it in place. We investigated four conductive materials and four insulators, selecting the best combination based on our design goals: a continuous activation surface, the least force required for skin activation, and the most consistent voltage change between the conductive surfaces measured during activation.
Natural Tasking of Robots Based on Human Interaction Cues
2005-06-01
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. Multiple previous works have used CNN features for videos, including CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
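Of the architecture families compared, the 3-D XYT-convolution CNN is the most direct to sketch. The PyTorch model below makes illustrative assumptions about layer sizes rather than reproducing the evaluated networks; it shows how a single filter bank mixes space and time.

```python
# Sketch of a 3-D (XYT) convolutional network for short video clips.
import torch
import torch.nn as nn

class XYTConvNet(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),  # T,H,W
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # pool over remaining time and space
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        h = self.features(clip).flatten(1)
        return self.classifier(h)

clip = torch.randn(2, 3, 16, 112, 112)   # two 16-frame RGB clips
print(XYTConvNet()(clip).shape)          # -> torch.Size([2, 8])
```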
Infant-Like Social Interactions Between a Robot and a Human Caregiver
2006-01-01
Action and language integration: from humans to cognitive robots.
Borghi, Anna M; Cangelosi, Angelo
2014-07-01
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also of natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot.
Greer, Joseph D; Morimoto, Tania K; Okamura, Allison M; Hawkes, Elliot W
2017-01-01
We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending of approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and the soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm when predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law, which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds.
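The constant-curvature assumption gives closed-form tip kinematics, which can be sketched in a few lines; the planar form below, with assumed arc length and curvature values, illustrates the model class rather than the paper's full three-sPAM kinematics.

```python
# Sketch: planar constant-curvature tip position for a bending segment.
import math

def cc_tip_position(L, kappa):
    """Tip of a planar arc of length L (m) and curvature kappa (1/m)."""
    if abs(kappa) < 1e-9:                      # straight-segment limit
        return (0.0, L)
    x = (1.0 - math.cos(kappa * L)) / kappa    # lateral deflection
    y = math.sin(kappa * L) / kappa            # height along initial tangent
    return (x, y)

print(cc_tip_position(L=0.42, kappa=2.0))      # 42 cm robot, gentle bend
```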
Towards the Verification of Human-Robot Teams
NASA Technical Reports Server (NTRS)
Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.
2005-01-01
Human-agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices also fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.
Warren, Zachary; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae; Wakita, Yujin; Ishiguro, Hiroshi; Mimura, Masaru; Minabe, Yoshio; Kikuchi, Mitsuru
2017-01-01
Recent rapid technological advances have enabled robots to fulfill a variety of human-like functions, leading researchers to propose the use of such technology for the development and subsequent validation of interventions for individuals with autism spectrum disorder (ASD). Although a variety of robots have been proposed as possible therapeutic tools, the physical appearances of humanoid robots currently used in therapy with these patients are highly varied. Very little is known about how these varied designs are experienced by individuals with ASD. In this study, we systematically evaluated preferences regarding robot appearance in a group of 16 individuals with ASD (ages 10–17). Our data suggest that there may be important differences in preference for different types of robots that vary according to interaction type for individuals with ASD. Specifically, within our pilot sample, children with higher-levels of reported ASD symptomatology reported a preference for specific humanoid robots to those perceived as more mechanical or mascot-like. The findings of this pilot study suggest that preferences and reactions to robotic interactions may vary tremendously across individuals with ASD. Future work should evaluate how such differences may be systematically measured and potentially harnessed to facilitate meaningful interactive and intervention paradigms. PMID:29028837
Human-robot interaction tests on a novel robot for gait assistance.
Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio
2013-06-01
This paper presents tests on a treadmill-based non-anthropomorphic wearable robot assisting hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests were conducted on a young healthy subject. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, owing to the robot's design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic status.
What Role for Emotions in Cooperating Robots? - The Case of RH3-Y
NASA Astrophysics Data System (ADS)
Dessimoz, Jean-Daniel; Gauthey, Pierre-François
The paper reviews key aspects of emotions in the context of cooperating robots (mostly, robots cooperating with humans), and gives numerous concrete examples from the RH-Y robots. Emotions were first systematically studied in relation to human expressions, and the focus has since shifted towards machine-based replication. Emotions appear to result from changes, from convergence or deviation between status and goals; they trigger appropriate activities, are commonly represented in 2D or 3D affect space, and can be made visible by facial expressions. While specific devices are sometimes created, emotive expressions seem to be conveniently rendered by a set of facial images or, more simply, by some icons; they can also be parameterized in a few dimensions for continuous modulation. In fact, however, the internal drives behind activities and changes may be expressed in many ways other than faces: screens, panels, and operational behaviors. Relying on emotions brings useful benefits, such as experience reuse, legibility and communication, but it also has limits stemming from the nature of robots, of interactive media, and even of the domain of emotions itself. For our goal, the design of effective and efficient cooperating robots for domestic applications, communication and interaction play key roles; best practices become evident only after experimental verification; and our experience gained over more than 10 years points to a variety of successful strategic attitudes and expression modes, well beyond classic human emotions and facial or iconic images.
Characteristics of Behavior of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Sato, Shigehiko; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has advantages over a single-robot system: it can adapt to various circumstances and is flexible with respect to variation in tasks. However, controlling each robot remains a problem, although methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world settings, and the emotion and sensitivity of robots have been widely studied. In this study, a human emotion model based on psychological interaction was adapted to a multi-robot system in order to develop methods for organizing multiple robots. The behavioral characteristics of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even from a rather simple configuration, with flexibility across various circumstances. Additional experiments with actual robots will be conducted based on this emotion model.
Terada, Kazunori; Takeuchi, Chikara
2017-01-01
In the present study, we investigated whether expressing emotional states using a simple line drawing to represent a robot's face can serve to elicit altruistic behavior from humans. An experimental investigation was conducted in which human participants interacted with a humanoid robot whose facial expression was shown on an LCD monitor that was mounted as its head (Study 1). Participants were asked to play the ultimatum game, which is usually used to measure human altruistic behavior. All participants were assigned to be the proposer and were instructed to decide their offer within 1 min by controlling a slider bar. The corners of the robot's mouth, as indicated by the line drawing, simply moved upward, or downward depending on the position of the slider bar. The results suggest that the change in the facial expression depicted by a simple line drawing of a face significantly affected the participant's final offer in the ultimatum game. The offers were increased by 13% when subjects were shown contingent changes of facial expression. The results were compared with an experiment in a teleoperation setting in which participants interacted with another person through a computer display showing the same line drawings used in Study 1 (Study 2). The results showed that offers were 15% higher if participants were shown a contingent facial expression change. Together, Studies 1 and 2 indicate that emotional expression in simple line drawings of a robot's face elicits the same higher offer from humans as a human telepresence does. PMID:28588520
In good company? Perception of movement synchrony of a non-anthropomorphic robot.
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.
Robonaut 2 and You: Specifying and Executing Complex Operations
NASA Technical Reports Server (NTRS)
Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia
2017-01-01
Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs, to highlight the challenges inherent in the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.
Soldier’s Load and the Multifunctional Utility/Logistics and Equipment-Transport
2010-06-11
Human-Robot Interaction: A Survey
2007-01-01
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in the child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at the single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form and organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The results obtained are presented and discussed in detail.
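The incremental, open-ended character of the learning can be sketched with a toy nearest-prototype learner driven by naming and corrective feedback; the representation below is an illustrative stand-in for the authors' category descriptions, not their architecture.

```python
# Sketch: open-ended word grounding with one running-mean prototype per word.
import numpy as np

class OpenEndedLearner:
    def __init__(self):
        self.prototypes = {}   # word -> mean feature vector
        self.counts = {}

    def teach(self, word, features):
        """Instructor names an object; create or update that category."""
        f = np.asarray(features, dtype=float)
        if word not in self.prototypes:
            self.prototypes[word], self.counts[word] = f.copy(), 1
        else:
            self.counts[word] += 1
            self.prototypes[word] += (f - self.prototypes[word]) / self.counts[word]

    def name(self, features):
        """Robot's guess: nearest prototype, or None before any teaching."""
        if not self.prototypes:
            return None
        f = np.asarray(features, dtype=float)
        return min(self.prototypes,
                   key=lambda w: np.linalg.norm(self.prototypes[w] - f))

learner = OpenEndedLearner()
learner.teach("cup", [0.9, 0.1])
learner.teach("ball", [0.1, 0.8])
guess = learner.name([0.8, 0.2])
if guess != "cup":                      # corrective feedback re-teaches the word
    learner.teach("cup", [0.8, 0.2])
print(guess)                            # -> cup
```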
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. To this end, the operator's residual functions must be exploited for the control of the robot's movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. This work explores the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks by a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of a robotic arm's trajectories, and a Brain-Computer Interface (BCI) unit, for control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and the environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
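The EEG-to-stiffness translation admits a compact sketch: map the estimated probability of an active state to a Cartesian stiffness value, with smoothing so the command cannot jump between EEG windows. The stiffness limits and smoothing constant below are assumptions for illustration, not the paper's calibrated values.

```python
# Sketch: translate P(active mental state) into a robot stiffness command.
def stiffness_from_bci(p_active, k_min=100.0, k_max=1000.0):
    """Map P(active state) in [0, 1] to stiffness in N/m (assumed limits)."""
    p = min(max(p_active, 0.0), 1.0)
    return k_min + p * (k_max - k_min)

class SmoothedStiffness:
    """Low-pass the command so stiffness cannot jump between EEG windows."""
    def __init__(self, alpha=0.2, k0=100.0):
        self.alpha, self.k = alpha, k0

    def update(self, p_active):
        self.k += self.alpha * (stiffness_from_bci(p_active) - self.k)
        return self.k

ctrl = SmoothedStiffness()
for p in [0.1, 0.2, 0.8, 0.9, 0.9]:    # EEG index over successive windows
    print(f"{ctrl.update(p):.0f} N/m")
```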
Wang, Likun; Du, Zhijiang; Dong, Wei; Shen, Yi; Zhao, Guangyu
2018-03-19
To achieve strength augmentation, endurance enhancement, and human assistance in a functional autonomous exoskeleton, control precision, back drivability, low output impedance, and mechanical compactness are desired. In our previous work, two elastic modules were designed for human-robot interaction sensing and compliant control, respectively. According to the intrinsic sensing properties of the elastic module, in this paper, only one compact elastic module is applied to realize both purposes. Thus, a corresponding control strategy is required, and evolving internal model control is proposed to address this issue. Moreover, the input signal to the controller is derived from the deflection of the compact elastic module. The human-robot interaction is treated as a disturbance, which is approximated by the output error between the exoskeleton control plant and an evolving forward learning model. Finally, to verify our proposed control scheme, several experiments are conducted with our robotic exoskeleton system. The experiments show satisfying results and promising application feasibility.
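The disturbance-approximation step above lends itself to a compact sketch. The first-order model, gains, and fake readings below are hypothetical stand-ins; the point is only the structure: the interaction is estimated as the gap between the measured output and the forward model's prediction, then cancelled in the command.

    # Sketch of the internal-model idea above with a hypothetical first-order
    # model y[k+1] = a*y[k] + b*u[k]. The human-robot interaction enters as a
    # disturbance d, estimated from the plant/model output error.
    A, B = 0.9, 0.1  # assumed forward-model coefficients

    def control_step(y_meas, y_model, u_prev, y_ref, kp=2.0):
        y_pred = A * y_model + B * u_prev  # forward-model prediction
        d_hat = y_meas - y_pred            # disturbance (interaction) estimate
        u = kp * (y_ref - y_meas) - d_hat  # servo term plus compensation
        return u, y_meas                   # model state re-anchored to measurement

    u, y_model = 0.0, 0.0
    for y_meas in (0.0, 0.05, 0.12):       # fake encoder readings
        u, y_model = control_step(y_meas, y_model, u, y_ref=0.2)
        print(round(u, 3))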
Social interaction enhances motor resonance for observed human actions.
Hogeveen, Jeremy; Obhi, Sukhvinder S
2012-04-25
Understanding the neural basis of social behavior has become an important goal for cognitive neuroscience and a key aim is to link neural processes observed in the laboratory to more naturalistic social behaviors in real-world contexts. Although it is accepted that mirror mechanisms contribute to the occurrence of motor resonance (MR) and are common to action execution, observation, and imitation, questions remain about mirror (and MR) involvement in real social behavior and in processing nonhuman actions. To determine whether social interaction primes the MR system, groups of participants engaged or did not engage in a social interaction before observing human or robotic actions. During observation, MR was assessed via motor-evoked potentials elicited with transcranial magnetic stimulation. Compared with participants who did not engage in a prior social interaction, participants who engaged in the social interaction showed a significant increase in MR for human actions. In contrast, social interaction did not increase MR for robot actions. Thus, naturalistic social interaction and laboratory action observation tasks appear to involve common MR mechanisms, and recent experience tunes the system to particular agent types.
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
Gandarias, Juan M; Gómez-de-Gabriel, Jesús M; García-Cerezo, Alfonso J
2018-02-26
The use of tactile perception can help first response robotic teams in disaster scenarios, where visibility conditions are often reduced due to the presence of dust, mud, or smoke, distinguishing human limbs from other objects with similar shapes. Here, the integration of the tactile sensor in adaptive grippers is evaluated, measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine classifier (SVM). Finally, a two-level output network has been proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers have a higher number of non-null tactels (up to 37% more), with a lower mean of pressure values (up to 72% less) than when using a rigid sensor, with a softer grip, which is needed in physical human-robot interaction (pHRI). A semi-rigid implementation with 95.13% object recognition rate was chosen, even though the human/non-human classification had better results (98.78%) with a rigid sensor.
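A two-level output network of the kind described above can be sketched as a shared convolutional trunk with two heads, one for the 15 object classes and one for the human/non-human decision. This is an illustrative PyTorch sketch: layer sizes and the single-channel 28x28 tactile-image input are assumptions, not the paper's DCNN.

    # Illustrative two-head DCNN over tactile images (assumed shapes).
    import torch
    import torch.nn as nn

    class TactileNet(nn.Module):
        def __init__(self, n_classes=15):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten())
            self.object_head = nn.Linear(32 * 16, n_classes)  # object-type logits
            self.human_head = nn.Linear(32 * 16, 2)           # human / non-human

        def forward(self, x):
            h = self.features(x)
            return self.object_head(h), self.human_head(h)

    logits_obj, logits_hum = TactileNet()(torch.randn(1, 1, 28, 28))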
Piezoresistive pressure sensor array for robotic skin
NASA Astrophysics Data System (ADS)
Mirza, Fahad; Sahasrabuddhe, Ritvij R.; Baptist, Joshua R.; Wijesundara, Muthu B. J.; Lee, Woo H.; Popa, Dan O.
2016-05-01
Robots are starting to transition from the confines of the manufacturing floor to homes, schools, hospitals, and highly dynamic environments. As a result, it is impossible to foresee all the probable operational situations of robots, and preprogram the robot behavior in those situations. Among human-robot interaction technologies, haptic communication is an intuitive physical interaction method that can help define operational behaviors for robots cooperating with humans. Multimodal robotic skin with distributed sensors can help robots increase perception capabilities of their surrounding environments. Electro-Hydro-Dynamic (EHD) printing is a flexible multi-modal sensor fabrication method because of its direct printing capability of a wide range of materials onto substrates with non-uniform topographies. In past work we designed interdigitated comb electrodes as a sensing element and printed piezoresistive strain sensors using customized EHD printable PEDOT:PSS based inks. We formulated a PEDOT:PSS derivative ink by mixing PEDOT:PSS and DMSO. Bending-induced characterization tests of prototyped sensors showed high sensitivity and sufficient stability. In this paper, we describe SkinCells, robot skin sensor arrays integrated with electronic modules. 4x4 EHD-printed arrays of strain sensors were packaged onto Kapton sheets and silicone encapsulant and interconnected to a custom electronic module that consists of a microcontroller, a Wheatstone bridge with an adjustable digital potentiometer, a multiplexer, and a serial communication unit. Thus, SkinCell's electronics can be used for signal acquisition, conditioning, and networking between sensor modules. Several SkinCells were loaded using controlled pressure, temperature, and humidity testing apparatuses, and testing results are reported in this paper.
In Good Company? Perception of Movement Synchrony of a Non-Anthropomorphic Robot
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot’s likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants’ perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot. PMID:26001025
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human-robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real-world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people's location. After segmentation, an adaptive contour people model based on people's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real-world scenarios.
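The evidence-over-time step above is a recursive Bayes update, which a short sketch makes concrete. The per-frame likelihood values below are placeholders, not the paper's contour-model probabilities.

    # Minimal sketch of Bayesian evidence accumulation for the hypothesis
    # "a person is here": per-frame detection likelihoods are fused over
    # time into a posterior. Likelihood values are placeholders.
    def bayes_update(prior, likelihood_person, likelihood_background):
        # One recursive Bayes step.
        num = likelihood_person * prior
        return num / (num + likelihood_background * (1.0 - prior))

    belief = 0.5  # uninformative prior
    for l_person, l_bg in [(0.7, 0.4), (0.8, 0.3), (0.6, 0.5)]:  # frame evidence
        belief = bayes_update(belief, l_person, l_bg)
        print(round(belief, 3))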
Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot
Greer, Joseph D.; Morimoto, Tania K.; Okamura, Allison M.; Hawkes, Elliot W.
2017-01-01
We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm for predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds. PMID:29379672
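The approximately-constant-curvature behavior noted above yields a standard closed-form tip position, sketched here for the planar case. The arc length and curvature values are arbitrary, and this is not the authors' full 3-D sPAM model.

    # Planar constant-curvature kinematics: tip position of an arc of
    # length L and curvature kappa (standard result for continuum robots).
    import math

    def cc_tip(L, kappa):
        if abs(kappa) < 1e-9:      # straight backbone
            return (0.0, L)
        theta = kappa * L          # total bending angle
        r = 1.0 / kappa            # radius of curvature
        return (r * (1 - math.cos(theta)), r * math.sin(theta))

    print(cc_tip(0.42, 2.0))       # 42 cm arm, curvature 2 m^-1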
Defining Soldier Intent in a Human-Robot Natural Language Interaction Context
2017-10-01
this burden on the human and expand the scope of human–robot operations, this project investigates fundamental research issues in the autonomous...attempted to devise a quantitative metric for the Shared Interpretation of Commander’s Intent (SICI). The authors’ background research indicated that...Another interesting set of results were the cases where the battalion and company commanders disagreed on the meaning of key terms, such as “delay”, which
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales
NASA Astrophysics Data System (ADS)
Peschel, J.; Young, S. N.
2017-12-01
The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence to this growth is a broad availability of unmanned platforms that might or might not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing the appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that has leveraged unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners, in general, as well as those working in fields related to emerging technologies.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design idea for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life, considering dependability and interaction. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot considering mobility and robustness; 3. develop the robot interface and software considering entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering esthetic elements and minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for remote-connection services and the educational purpose.
Parisi, Domenico
2010-01-01
Trying to understand human language by constructing robots that have language necessarily implies an embodied view of language, where the meaning of linguistic expressions is derived from the physical interactions of the organism with the environment. The paper describes a neural model of language according to which the robot's behaviour is controlled by a neural network composed of two sub-networks, one dedicated to the non-linguistic interactions of the robot with the environment and the other one to processing linguistic input and producing linguistic output. We present the results of a number of simulations using the model and we suggest how the model can be used to account for various language-related phenomena such as disambiguation, the metaphorical use of words, the pervasive idiomaticity of multi-word expressions, and mental life as talking to oneself. The model implies a view of the meaning of words and multi-word expressions as a temporal process that takes place in the entire brain and has no clearly defined boundaries. The model can also be extended to emotional words if we assume that an embodied view of language includes not only the interactions of the robot's brain with the external environment but also the interactions of the brain with what is inside the body.
Rasheed, Nadia; Amin, Shamsudin H. M.
2016-01-01
Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue. PMID:27069470
Gerłowska, Justyna; Skrobas, Urszula; Grabowska-Aleksandrowicz, Katarzyna; Korchut, Agnieszka; Szklener, Sebastian; Szczęśniak-Stańczyk, Dorota; Tzovaras, Dimitrios; Rejdak, Konrad
2018-01-01
The aim of the present study is to present the results of the assessment of the clinical application of a robotic assistant for patients suffering from mild cognitive impairment (MCI) and Alzheimer's disease (AD). The human-robot interaction (HRI) evaluation approach taken within the study is a novelty in the field of social robotics. The proposed assessment of the robotic functionalities is based on end-user perception of attractiveness, usability, and potential societal impact of the device. The methods of evaluation applied consist of the User Experience Questionnaire (UEQ), AttrakDiff, and a societal impact inventory tailored for the project purposes. The prototype version of the Robotic Assistant for MCI patients at Home (RAMCIP) was tested in a semi-controlled environment at the Department of Neurology (Lublin, Poland). Eighteen elderly participants, 10 healthy and 8 with MCI, performed everyday tasks and functions facilitated by RAMCIP. The tasks consisted of semi-structured scenarios such as medication intake, hazardous-event prevention, and social interaction. No differences between the groups of subjects were observed in terms of perceived attractiveness, usability, or societal impact of the device. The robotic assistant's societal impact and attractiveness were rated highly. The usability of the device was rated as neutral due to the short time of interaction.
Robust Control of a Cable-Driven Soft Exoskeleton Joint for Intrinsic Human-Robot Interaction.
Jarrett, C; McDaid, A J
2017-07-01
A novel, cable-driven soft joint is presented for use in robotic rehabilitation exoskeletons to provide intrinsic, comfortable human-robot interaction. The torque-displacement characteristics of the soft elastomeric core contained within the joint are modeled. This knowledge is used in conjunction with a dynamic system model to derive a sliding mode controller (SMC) to implement low-level torque control of the joint. The SMC controller is experimentally compared with a baseline feedback-linearised proportional-derivative controller across a range of conditions and shown to be robust to un-modeled disturbances. The torque controller is then tested with six healthy subjects while they perform a selection of activities of daily living, which has validated its range of performance. Finally, a case study with a participant with spastic cerebral palsy is presented to illustrate the potential of both the joint and controller to be used in a physiotherapy setting to assist clinical populations.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers and measure the classification accuracy of visual signal interfaces, and it provides an integration example including two robotic platforms.
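IMU-based gesture classification of the kind motivated above is commonly built from window statistics over the inertial streams feeding a standard classifier. The features, the SVM choice, and the synthetic data below are illustrative assumptions, not the system described in the paper.

    # Hedged sketch: window statistics of accel+gyro samples -> classifier.
    import numpy as np
    from sklearn.svm import SVC

    def window_features(imu_window):
        # imu_window: (T, 6) array of accel+gyro samples -> feature vector.
        return np.concatenate([imu_window.mean(axis=0),
                               imu_window.std(axis=0),
                               np.abs(imu_window).max(axis=0)])

    rng = np.random.default_rng(0)
    X = np.stack([window_features(rng.normal(size=(50, 6))) for _ in range(40)])
    y = np.repeat([0, 1], 20)  # two hypothetical arm-and-hand signals
    clf = SVC().fit(X, y)
    print(clf.predict(X[:2]))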
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.
2013-01-01
Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than that of other robotic hands without the actuators, since they have more complex assembly processes.
Design and development of biomimetic quadruped robot for behavior studies of rats and mice.
Ishii, Hiroyuki; Masuda, Yuichi; Miyagishima, Syunsuke; Fumino, Shogo; Takanishi, Atsuo; Laschi, Cecilia; Mazzolai, Barbara; Mattoli, Virgilio; Dario, Paolo
2009-01-01
This paper presents the design and development of a novel biomimetic quadruped robot for behavior studies of rats and mice. Many studies have been performed using these animals for the purpose of understanding the human mind in psychology, pharmacology, and brain science. In these fields, several experiments on social interactions have been performed using rats as basic studies of mental disorders or social learning. However, some researchers mention that experiments on social interactions using animals are poorly reproducible. Therefore, we consider that the reproducibility of these experiments can be improved by using a robotic agent that interacts with an animal subject. Thus, we developed a small quadruped robot, WR-2 (Waseda Rat No. 2), that behaves like a real rat. The proportions and DOF arrangement of WR-2 are designed based on those of a mature rat. This robot has four 3-DOF legs, a 2-DOF waist, and a 1-DOF neck. A microcontroller and a wireless communication module are implemented on it, and a battery is also included. Thus, it can walk, rear on its limbs, and groom its body.
Human Factors Consideration for the Design of Collaborative Machine Assistants
NASA Astrophysics Data System (ADS)
Park, Sung; Fisk, Arthur D.; Rogers, Wendy A.
Recent improvements in technology have facilitated the use of robots and virtual humans not only in entertainment and engineering but also in the military (Hill et al., 2003), healthcare (Pollack et al., 2002), and education domains (Johnson, Rickel, & Lester, 2000). As active partners of humans, such machine assistants can take the form of a robot or a graphical representation and serve the role of a financial assistant, a health manager, or even a social partner. As a result, interactive technologies are becoming an integral component of people's everyday lives.
Ong, Carmichael F.; Hicks, Jennifer L.; Delp, Scott L.
2017-01-01
Goal: Technologies that augment human performance are the focus of intensive research and development, driven by advances in wearable robotic systems. Success has been limited by the challenge of understanding human-robot interaction. To address this challenge, we developed an optimization framework to synthesize a realistic human standing long jump and used the framework to explore how simulated wearable robotic devices might enhance jump performance. Methods: A planar, five-segment, seven-degree-of-freedom model with physiological torque actuators, which have variable torque capacity depending on joint position and velocity, was used to represent human musculoskeletal dynamics. An active augmentation device was modeled as a torque actuator that could apply a single pulse of up to 100 Nm of extension torque. A passive design was modeled as rotational springs about each lower limb joint. Dynamic optimization searched for physiological and device actuation patterns to maximize jump distance. Results: Optimization of the nominal case yielded a 2.27 m jump that captured salient kinematic and kinetic features of human jumps. When the active device was added to the ankle, knee, or hip, jump distance increased to between 2.49 and 2.52 m. Active augmentation of all three joints increased the jump distance to 3.10 m. The passive design increased jump distance to 3.32 m by adding torques of 135 Nm, 365 Nm, and 297 Nm to the ankle, knee, and hip, respectively. Conclusion: Dynamic optimization can be used to simulate a standing long jump and investigate human-robot interaction. Significance: Simulation can aid in the design of performance-enhancing technologies. PMID:26258930
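The position- and velocity-dependent torque capacity mentioned in the Methods can be sketched with assumed functional forms. The Gaussian angle dependence and linear force-velocity drop-off below are illustrative stand-ins, not the study's fitted actuator curves.

    # Sketch of a physiological torque limit: available torque scales with
    # joint position and velocity (assumed forms, hypothetical parameters).
    import math

    def torque_capacity(tau_max, q, dq, q_opt=0.0, width=1.0, dq_max=10.0):
        f_angle = math.exp(-((q - q_opt) / width) ** 2)  # strongest near q_opt
        f_vel = max(0.0, 1.0 - dq / dq_max)              # weaker when shortening fast
        return tau_max * f_angle * f_vel

    print(torque_capacity(150.0, q=0.3, dq=2.0))         # Nm, hypothetical values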
Challenges for Service Robots-Requirements of Elderly Adults with Cognitive Impairments.
Korchut, Agnieszka; Szklener, Sebastian; Abdelnour, Carla; Tantinya, Natalia; Hernández-Farigola, Joan; Ribes, Joan Carles; Skrobas, Urszula; Grabowska-Aleksandrowicz, Katarzyna; Szczęśniak-Stańczyk, Dorota; Rejdak, Konrad
2017-01-01
We focused on identifying the requirements and needs of people suffering from Alzheimer's disease and early dementia stages with relation to robotic assistants. Based on focus groups performed in two centers (Poland and Spain), we created surveys for medical staff, patients, and caregivers, covering functional requirements, human-robot interaction, the design of the robotic assistant, and user acceptance aspects. Using a Likert scale and an analysis based on the frequency of survey responses, we identified users' needs as high, medium, or low priority. We gathered 264 completed surveys (100 from medical staff, 81 from caregivers, and 83 from potential users). Most of the respondents, at almost the same level in each of the three groups, accept robotic assistants and their support in everyday life. High-priority functional requirements were related to reacting in emergency situations (calling for help, detecting/removing obstacles) and to reminding about medication intake, boiling water, and turning off the gas and lights (almost 60% of answers). With reference to human-robot interaction, high priority was given to a voice-operated system and to the capability of robotic assistants to reply to simple questions. Our results help in achieving a better understanding of the needs of patients with cognitive impairments during home tasks in everyday life. This way of conducting the research, with consideration for the interests of three stakeholder groups in two independent centers with proven experience regarding the needs of our patient groups, highlights the importance of the obtained results.
Meeting the challenges of installing a mobile robotic system
NASA Technical Reports Server (NTRS)
Decorte, Celeste
1994-01-01
The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent to installing the mobile robotic system fall into one of three categories: (1) the physical environment - location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment - external equipment with which the mobile robotic system will interact; and (3) the human environment - personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environment requires more than a good pair of pliers. The tools for this job include: careful planning, accurate measurement data (as-built drawings), complete technical data of systems to be interfaced, sufficient time and attention of key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation - by all concerned - of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of three categories, and explore approaches to solving these problems. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge learned during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. And it is with these sensors and a drive system that the installer must become familiar in order to foresee potential trouble areas in the physical, technical, and human environment.
Scalable fabric tactile sensor arrays for soft bodies
NASA Astrophysics Data System (ADS)
Day, Nathan; Penaloza, Jimmy; Santos, Veronica J.; Killpack, Marc D.
2018-06-01
Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots which can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. This paper discusses the development of a method to make affordable, high-resolution, tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate a change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. Fabric sensor arrays are demonstrated for two different soft-bodied applications: an inflatable single link robot and a human wrist.
Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)
NASA Technical Reports Server (NTRS)
Baroff, Lynn; Dischinger, Charlie; Fitts, David
2009-01-01
Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems; and the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs; for surface operations as well as for in-flight monitoring and control; and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.
Cognitive Robotics, Embodied Cognition and Human-Robot Interaction
2010-11-03
architecture is a specification of the structure of the brain at a level of abstraction that explains how it achieves the function of the mind (Anderson...predictions about brain regions (fMRI)...Embodied Cognitive Modeling: We use an MDS robot (Trafton et al., 2010...passed memory and/or reality control questions (e.g., "Where did Maxi put the chocolate?" or "Where is the chocolate now?"). Our reasoning was that age
Vu, Dinh-Son; Allard, Ulysse Cote; Gosselin, Clement; Routhier, Francois; Gosselin, Benoit; Campeau-Lecours, Alexandre
2017-07-01
Robotic assistive devices enhance the autonomy of individuals living with physical disabilities in their day-to-day life. Although the first priority for such devices is safety, they must also be intuitive and efficient from an engineering point of view in order to be adopted by a broad range of users. This is especially true for assistive robotic arms, as they are used for the complex control tasks of daily living. One challenge in the control of such assistive robots is the management of the end-effector orientation, which is not always intuitive for the human operator, especially for neophytes. This paper presents a novel orientation control algorithm designed for robotic arms in the context of human-robot interaction. This work aims at making the control of the robot's orientation easier and more intuitive for the user, in particular individuals living with upper limb disabilities. The performance and intuitiveness of the proposed orientation control algorithm are assessed through two experiments with 25 able-bodied subjects and shown to improve significantly on both aspects.
NASA Astrophysics Data System (ADS)
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking the cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the "cosmonaut-robot" system on the lunar surface is significantly different from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human performs movements less accurately and often makes mistakes. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at the task stages and gesture recognition. New tools and techniques for space missions must be examined at the first stage of work in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of a human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the "vision" of the robot to track a moving cosmonaut dressed in a spacesuit.
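A "Follow Me" travel behavior of the kind described above typically regulates the robot's range and bearing to the tracked person. The gains and the 2 m stand-off distance in this sketch are assumptions, not values from the article.

    # Minimal follow-me sketch: close the gap to the tracked cosmonaut and
    # turn toward them; stop at a hypothetical 2 m stand-off distance.
    def follow_me_cmd(range_m, bearing_rad, standoff=2.0, kv=0.8, kw=1.5):
        # Return (linear velocity, angular velocity) commands.
        v = kv * (range_m - standoff)  # close the gap, stop at stand-off
        w = kw * bearing_rad           # turn toward the person
        return max(0.0, v), w

    print(follow_me_cmd(3.2, 0.15))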
Improving the transparency of a rehabilitation robot by exploiting the cyclic behaviour of walking.
van Dijk, W; van der Kooij, H; Koopman, B; van Asseldonk, E H F; van der Kooij, H
2013-06-01
To promote active participation of neurological patients during robotic gait training, controllers such as "assist as needed" or "cooperative control" are suggested. Apart from providing support, these controllers also require that the robot be capable of resembling natural, unsupported walking. This means that they should have a transparent mode, where the interaction forces between the human and the robot are minimal. Traditional feedback-control algorithms do not exploit the cyclic nature of walking to improve the transparency of the robot. The purpose of this study was to improve the transparent mode of robotic devices by developing two controllers that use the rhythmic behavior of gait. Both controllers use adaptive frequency oscillators and kernel-based non-linear filters. Kernel-based non-linear filters can be used to estimate signals and their time derivatives as a function of the gait phase. The first controller learns the motor angle associated with a certain joint angle pattern and acts as a feed-forward controller to improve the torque tracking (including the zero-torque mode). The second controller learns the state of the mechanical system and compensates for dynamical effects (e.g. the acceleration of robot masses). Both controllers have been tested separately and in combination on a small subject population. Using the feed-forward controller resulted in improved torque tracking of at least 52 percent at the hip joint and 61 percent at the knee joint. When both controllers were active simultaneously, the interaction power between the robot and the human leg was reduced by at least 40 percent at the thigh and 43 percent at the shank. These results indicate that, if a robotic task is cyclic, the torque tracking and transparency can be improved by exploiting the predictions of adaptive frequency oscillators and kernel-based non-linear filters.
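An adaptive frequency oscillator of the kind used above locks its phase and frequency onto a rhythmic input. The following sketch follows Righetti-style phase-oscillator dynamics with a synthetic input; the gains and coupling are illustrative, not the paper's implementation.

    # Sketch of an adaptive frequency oscillator (AFO) synchronizing to a
    # rhythmic signal. Euler integration; gains are illustrative.
    import math

    def afo_step(phi, omega, signal, dt=0.01, K=20.0):
        # One step: phase and frequency adapt to the input signal.
        e = signal - math.cos(phi)  # error between input and oscillator output
        phi += dt * (omega - K * e * math.sin(phi))
        omega += dt * (-K * e * math.sin(phi))
        return phi, omega

    phi, omega = 0.0, 5.0           # initial guess of the gait frequency
    for k in range(2000):
        t = k * 0.01
        phi, omega = afo_step(phi, omega, math.cos(7.0 * t))
    print(round(omega, 2))          # should approach the input's 7 rad/s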
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five year $5.5 million grant from NASA submitted by a team of the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME,AE,&M), in 1987. This report is an examination of the activities that are centered at CIRSSE.
Robots with a gentle touch: advances in assistive robotics and prosthetics.
Harwin, W S
1999-01-01
As healthcare costs rise and an aging population places increased demand on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so that they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has some strong implications for health care, in particular for rehabilitation, where a robot must work in conjunction with an individual, and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state of the art of rehabilitation robots, with examples from prosthetics, aids for daily living, and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. This paper concludes with a discussion of this problem and overviews some human models that can be used to facilitate the design of human/machine interfaces.
A concept for ubiquitous robotics in industrial environment
NASA Astrophysics Data System (ADS)
Sallinen, Mikko; Heilala, Juhani; Kivikunnas, Sauli
2007-09-01
In this paper a concept for industrial ubiquitous robotics is presented. The concept combines two different approaches to manage agile, adaptable production: first, the human operator is kept strongly in the production loop; second, the robot workcell becomes more autonomous and smarter in managing production. This kind of autonomous robot cell can be called a production island. Communication with the human operator working in this kind of smart industrial environment can be divided into two levels: body-area communication and operator-infrastructure communication, including devices, machines, and infrastructure. Body-area communication can be supportive in two directions: data can be recorded by measuring physical actions, such as hand movements and body gestures, or information such as guides or manuals for operation can be provided to the user. Body-area communication can be carried out using short-range communication technologies such as NFC (Near Field Communication), an RFID type of communication. For operator-infrastructure communication, WLAN or Bluetooth communication can be used. Beyond current Human-Machine Interaction (HMI) systems, the presented system concept is designed to fulfill the requirements of hybrid, knowledge-intensive manufacturing in the future, where humans and robots operate in close co-operation.
Mendoza, Marco; Bonilla, Isela; González-Galván, Emilio; Reyes, Fernando
2016-01-01
This paper presents an improved wave-based bilateral teleoperation scheme for rehabilitation therapies assisted by robot manipulators. The main feature of this bilateral teleoperator is that both robot manipulators, master and slave, are controlled by impedance. Thus, a pair of motion-based adaptive impedance controllers are integrated into a wave-based configuration, in order to guarantee a stable human-robot interaction and to compensate the position drift, characteristic of the available schemes of bilateral teleoperation. Moreover, the teleoperator stability, in the presence of time delays in the communication channel, is guaranteed because the wave-variable approach is included to encode the force and velocity signals. It should be noted that the proposed structure enables the implementation of several teleoperator schemes, from passive therapies, without the intervention of a human operator on the master side, to fully active therapies where both manipulators interact with humans in a stable manner. The suitable performance of the proposed teleoperator is verified through some results obtained from the simulation of the passive and active-constrained modes, by considering typical tasks in motor-therapy rehabilitation, where an improved behavior is observed when compared to implementations of the classical wave-based approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
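The wave-variable encoding mentioned above is what preserves passivity under communication delay: velocity/force pairs are exchanged as wave variables with a characteristic impedance b. The sketch below shows the standard transform only; the paper's adaptive impedance controllers are omitted.

    # Standard wave-variable transform for bilateral teleoperation.
    import math

    def encode_master(b, x_dot_m, f_m):
        # Master-side wave variable sent to the slave.
        return (b * x_dot_m + f_m) / math.sqrt(2.0 * b)

    def decode_slave(b, u_received, f_s):
        # Recover the commanded velocity on the slave side.
        return (math.sqrt(2.0 * b) * u_received - f_s) / b

    u = encode_master(b=10.0, x_dot_m=0.2, f_m=1.5)
    print(decode_slave(10.0, u, f_s=1.5))  # ~0.2 m/s when forces match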
A secure and easy-to-implement web-based communication framework for caregiving robot teams
NASA Astrophysics Data System (ADS)
Tuna, G.; Daş, R.; Tuna, A.; Örenbaş, H.; Baykara, M.; Gülez, K.
2016-03-01
In recent years, robots have started to become more commonplace in our lives, from factory floors to museums, festivals, and shows. They have started to change how we work and play. With an increase in the elderly population, they have also started to be used for caregiving services, and hence many countries have been investing in robot development. The advancements in robotics and wireless communications have led to the emergence of autonomous caregiving robot teams which cooperate to accomplish a set of tasks assigned by human operators. Although wireless communications and devices are flexible and convenient, they are vulnerable to many risks compared to traditional wired networks. Since robots with wireless communication capability transmit all data types, including sensory, coordination, and control, through radio frequencies, they are open to intruders and attackers unless protected, and their openness may lead to many security issues such as data theft, passive listening, and service interruption. In this paper, a secure web-based communication framework is proposed to address potential security threats due to wireless communication in robot-robot and human-robot interaction. The proposed framework is simple and practical, and can be used by caregiving robot teams in the exchange of sensory data as well as coordination and control data.
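The kind of secure exchange such a framework targets can be sketched as sensory data posted over TLS with token authentication. The endpoint URL, token scheme, and payload fields below are hypothetical, not the paper's actual API.

    # Hedged sketch of a TLS-protected sensor report (hypothetical endpoint).
    import json
    import requests  # assumes the requests package is installed

    def send_sensor_report(robot_id, readings,
                           url="https://caregiver-hub.example/api/reports",
                           token="REPLACE_ME"):
        payload = {"robot": robot_id, "readings": readings}
        resp = requests.post(url,
                             data=json.dumps(payload),
                             headers={"Authorization": f"Bearer {token}",
                                      "Content-Type": "application/json"},
                             timeout=5,
                             verify=True)  # reject invalid TLS certificates
        resp.raise_for_status()
        return resp.json()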
Hiolle, Antoine; Lewis, Matthew; Cañamero, Lola
2014-01-01
In the context of our work in developmental robotics regarding robot-human caregiver interactions, in this paper we investigate how a "baby" robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a "caregiver" to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two "idealized" robot profiles-a "needy" and an "independent" robot-in terms of their use of a caregiver as a means to regulate the "stress" (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness-"responsive" and "non-responsive"-to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the "needy" and "independent" axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot.
Socialization between toddlers and robots at an early childhood education center
Tanaka, Fumihide; Cicourel, Aaron; Movellan, Javier R.
2007-01-01
A state-of-the-art social robot was immersed in a classroom of toddlers for >5 months. The quality of the interaction between children and robots improved steadily for 27 sessions, quickly deteriorated for 15 sessions when the robot was reprogrammed to behave in a predictable manner, and improved in the last three sessions when the robot displayed again its full behavioral repertoire. Initially, the children treated the robot very differently than the way they treated each other. By the last sessions, 5 months later, they treated the robot as a peer rather than as a toy. Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time and that it could have great potential in educational settings assisting teachers and enriching the classroom environment. PMID:17984068
Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.
2017-01-01
Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley, as well as the visual factors that contribute to an agent's uncanniness. PMID:28912736
Multimodal emotional state recognition using sequence-dependent deep hierarchical features.
Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan
2015-12-01
Emotional state recognition has become an important topic for human-robot interaction in recent years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion, thereby extending the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches in the literature, which are based on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that a significant improvement in recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from the 82.5% reported in the literature to 91.3% on a benchmark dataset of spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Compendium of Abstracts. Volume 2
2010-08-01
researched for various applications such as self-healing and fluid transport. One method of creating these vascular systems is through a process called... Daniel J. Dexterous robotic manipulators that rely on joystick-type interfaces for teleoperation require considerable time and effort to master... and lack an intuitive basis for human-robot interaction. This hampers operator performance, increases cognitive workload, and limits overall
Lippi, Vittorio; Mergner, Thomas
2017-01-01
The high complexity of the human posture and movement control system presents challenges for the diagnosis, therapy, and rehabilitation of neurological patients. We envisage that engineering-inspired, model-based approaches will help to deal with this complexity. Since the methods of system identification and parameter estimation are limited to systems with only a few DoF, our laboratory proposes a heuristic approach that step-by-step increases complexity when creating a hypothetical human-derived control system in humanoid robots. This system is then compared with human control in the same test bed, a posture control laboratory. The human-derived control builds upon the identified disturbance estimation and compensation (DEC) mechanism, whose main principle is to support execution of commanded poses or movements by compensating for external or self-produced disturbances such as gravity effects. In previous robotic implementations, up to 3 interconnected DEC control modules were used in modular control architectures, separately for the sagittal or the frontal body plane, and successfully passed balancing and movement tests. In this study we hypothesized that conflict-free movement coordination between the robot's sagittal and frontal body planes emerges simply from the physical embodiment, not necessarily requiring full-body control. Experiments were performed on the 14-DoF robot Lucy Posturob, (i) demonstrating that the mechanical coupling from the robot's body suffices to coordinate the controls in the two planes when the robot produces movements and balancing responses in the intermediate plane, (ii) providing quantitative characterization of the interaction dynamics between body planes, including frequency response functions (FRFs) as used in human postural control analysis, and (iii) demonstrating postural and control stability when all DoFs are challenged together, with the emergence of inter-segmental coordination in squatting movements. These findings represent an important step toward controlling more complex sensorimotor functions in the robot in the future, such as walking. PMID:28951719
Biomedical applications of soft robotics
NASA Astrophysics Data System (ADS)
Cianchetti, Matteo; Laschi, Cecilia; Menciassi, Arianna; Dario, Paolo
2018-06-01
Soft robotics enables the design of soft machines and devices at different scales. The compliance and mechanical properties of soft robots make them especially interesting for medical applications. Depending on the level of interaction with humans, different levels of biocompatibility and biomimicry are required for soft materials used in robots. In this Review, we investigate soft robots for biomedical applications, including soft tools for surgery, diagnosis and drug delivery, wearable and assistive devices, prostheses, artificial organs and tissue-mimicking active simulators for training and biomechanical studies. We highlight challenges regarding durability and reliability, and examine traditional and novel soft and active materials as well as different actuation strategies. Finally, we discuss future approaches and applications in the field.
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving, and robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging that is unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide sufficient simulation fidelity. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoon-like nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.
A Control Framework for Anthropomorphic Biped Walking Based on Stabilizing Feedforward Trajectories.
Rezazadeh, Siavash; Gregg, Robert D
2016-10-01
Although dynamic walking methods have had notable successes in the control of bipedal robots in recent years, most humanoid robots still rely on quasi-static Zero Moment Point controllers. This work is an attempt to design a highly stable controller for dynamic walking of a human-like model, which can be used both for control of humanoid robots and for prosthetic legs. The method is based on using time-based trajectories that can induce a highly stable limit cycle in the bipedal robot. The time-based nature of the controller motivates its use to entrain a model of an amputee walking, which can potentially lead to better coordination of the interaction between the prosthesis and the human. Simulations demonstrate the stability of the controller and its robustness against external perturbations.
Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko
2016-08-01
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full-body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to natural language, such that the robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with natural language. The framework consists of two models: the first statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integrating the two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in appropriate order using the second, yielding the sentences most likely to be generated from the motion. The proposed framework was tested on human full-body motion measured by an optical motion capture system, with descriptive sentences manually attached to the motions, and the validity of the system was demonstrated. Copyright © 2016 Elsevier Ltd. All rights reserved.
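To make the two-model integration concrete, here is a minimal sketch in Python; the vocabulary, probabilities, and sentence-length limits are invented for illustration and stand in for the statistically learned models described above:

    from itertools import permutations

    # P(word | motion): hypothetical relevance of words to an observed motion,
    # standing in for the first (motion-to-word) statistical model.
    word_given_motion = {"a": 0.05, "person": 0.40, "waves": 0.45, "hand": 0.10}

    # P(next | prev): hypothetical bigram probabilities, standing in for the
    # second (sentence-structure) model. <s> and </s> mark sentence bounds.
    bigram = {
        ("<s>", "a"): 0.6, ("<s>", "person"): 0.3,
        ("a", "person"): 0.7, ("a", "hand"): 0.4,
        ("person", "waves"): 0.5, ("waves", "a"): 0.2,
        ("waves", "</s>"): 0.3, ("hand", "</s>"): 0.8,
    }

    def sentence_score(words):
        """Joint score: motion relevance of each word times bigram fluency."""
        score = 1.0
        for w in words:
            score *= word_given_motion.get(w, 1e-6)
        for prev, nxt in zip(("<s>",) + words, words + ("</s>",)):
            score *= bigram.get((prev, nxt), 1e-6)
        return score

    # Search word orderings (up to 4 words) for the most likely sentence.
    candidates = [p for n in (2, 3, 4) for p in permutations(word_given_motion, n)]
    print(" ".join(max(candidates, key=sentence_score)))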
Human Exploration on the Moon, Mars and NEOs: PEX.2/ICEUM12B
NASA Astrophysics Data System (ADS)
Foing, Bernard H.
2016-07-01
The session COSPAR-16-PEX.2, "Human Exploration on the Moon, Mars and NEOs", co-sponsored by Commissions B and F, will include solicited and contributed talks and poster/interactive presentations. It will also be part of the 12th International Conference on Exploration and Utilisation of the Moon (ICEUM12B), from the ILEWG ICEUM series started in 1994. It will address various themes and COSPAR communities: sciences (of, on, from) the Moon enabled by humans; research from cislunar and libration points; from robotic villages to international lunar bases; research from Mars and NEO outposts; humans to Phobos/Deimos, Mars and NEOs; challenges, preparatory technologies, and field research operations; human and robotic partnerships and precursor missions; resource utilisation, life support and sustainable exploration; and stakeholders for human exploration. One half-day session will be dedicated to a workshop format and meetings/reports of task groups: Science, Technology, Agencies, Robotic village, Human bases, Society & Commerce, Outreach, Young Explorers. COSPAR has provided, through Commissions, Panels and Working Groups (such as ILEWG and IMEWG), an international forum for supporting and promoting the robotic and human exploration of the Moon, Mars and NEOs. Proposed sponsors: ILEWG, ISECG, IKI, ESA, NASA, DLR, CNES, ASI, UKSA, JAXA, ISRO, SRON, CNSA, SSERVI, IAF, IAA, Lockheed Martin, Google Lunar X prize, UNOOSA
Self-evaluation on Motion Adaptation for Service Robots
NASA Astrophysics Data System (ADS)
Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru
We propose a self-evaluation method that allows service robots to adapt their motions to environmental changes. Motions such as walking, dancing, and demonstration are described as time-series patterns. These motions are optimized for the robot's architecture and for a particular surrounding environment; in an unknown operating environment, the robots cannot accomplish their tasks. We propose an autonomous motion-generation technique based on heuristic search over histories of internal sensor values. New motion patterns are explored in the unknown operating environment based on self-evaluation. The robot starts with prepared motions that accomplish the tasks in the designed environment, and the internal sensor values observed there reflect the results of its interaction with that environment. The self-evaluation is computed from the difference between the internal sensor values observed in the designed environment and those observed in the unknown operating environment. The proposed method modifies the motions so that the interaction results in the two environments match. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, the robot's global position, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.
Scano, A; Chiavenna, A; Caimmi, M; Malosio, M; Tosatti, L M; Molteni, F
2017-07-01
Robot-assisted training is a widely used technique to promote motor re-learning in post-stroke patients who suffer from motor impairment. While it is commonly accepted that robot-based therapies are potentially helpful, strong evidence about their efficacy is still lacking. The motor re-learning process may act on muscular synergies, groups of co-activating muscles that, being controlled as a single group, simplify the problem of motor control: by coordinating a reduced number of neural signals, complex motor patterns can be elicited. This paper analyzes the effects of robot assistance during 3D reaching movements in the framework of muscular synergies. Five healthy people and three neurological patients performed free and robot-assisted reaching movements at two different speeds (slow and quasi-physiological). EMG recordings were used to extract muscular synergies. Results indicate that interaction with the robot only slightly alters the synergy patterns of healthy people but, on the contrary, may promote the emergence of physiological-like synergies in neurological patients.
How does a surgeon's brain buzz? An EEG coherence study on the interaction between humans and robot.
Bocci, Tommaso; Moretto, Carlo; Tognazzi, Silvia; Briscese, Lucia; Naraci, Megi; Leocani, Letizia; Mosca, Franco; Ferrari, Mauro; Sartucci, Ferdinando
2013-04-22
In humans, both primary and non-primary motor areas are involved in the control of voluntary movements. However, the dynamics of functional coupling among different motor areas have not been fully clarified. To date there is no research looking at the functional dynamics in the brains of surgeons working in laparoscopy compared with those trained and working in robotic surgery. We enrolled 16 right-handed trained surgeons and assessed changes in intra- and inter-hemispheric EEG coherence with a 32-channel device during the same motor task performed with either a robotic or a laparoscopic approach. Estimates of auto- and coherence spectra were calculated by a fast Fourier transform algorithm implemented in Matlab 5.3. We found an increase of coherence in surgeons performing laparoscopy, especially in theta and lower alpha activity, in all experimental conditions (M1 vs. SMA, S1 vs. SMA, S1 vs. pre-SMA and M1 vs. S1; p < 0.001). Conversely, an increase in inter-hemispheric coherence in the upper alpha and beta bands was found in surgeons using the robotic procedure (right vs. left M1, right vs. left S1, right pre-SMA vs. left M1, left pre-SMA vs. right M1; p < 0.001). Our data provide a semi-quantitative evaluation of the dynamics of functional coupling among different cortical areas in skilled surgeons performing laparoscopic or robotic surgery. These results suggest that motor and non-motor areas are differently activated and coordinated in surgeons performing the same task with different approaches. To the best of our knowledge, this is the first study to assess semi-quantitative differences in the interaction between the normal human brain and robotic devices. PMID:23607324
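As a hedged illustration of the kind of spectral analysis reported above (using SciPy rather than the authors' Matlab 5.3 pipeline), magnitude-squared coherence between two channels can be computed from FFT-based spectral estimates; the synthetic signals, sampling rate, and band limits below are invented for the example:

    import numpy as np
    from scipy.signal import coherence

    fs = 256                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz (alpha) component
    ch1 = shared + 0.5 * np.random.randn(t.size)
    ch2 = 0.8 * shared + 0.5 * np.random.randn(t.size)

    # Magnitude-squared coherence from FFT-based auto- and cross-spectra.
    f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
    alpha = (f >= 8) & (f <= 12)
    print("mean alpha-band coherence:", cxy[alpha].mean())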
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots: a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data back into the real environment, e.g., by visualizing positions on the field, so as to create situations that allow easy debugging of robot programs. The significant point, compared with analogous work, is that virtual objects are touchable in this system owing to the projectors. We also present a portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We have our robots use virtual balls so that only quadruped locomotion, which is quite difficult to simulate accurately, is performed in the real environment. In our augmented environment, the robots autonomously learn and acquire more beneficial strategies, without human intervention, than they do in a fully simulated environment.
NASA Astrophysics Data System (ADS)
Billard, Aude
2000-10-01
This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all the experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present: 1) DRAMA, a novel connectionist architecture with general properties for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot that imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learns the topography of an environment whose features change frequently; 4) an abstract, computational model of the primate ability to learn by imitation; and 5) a model for the control of locomotor gaits in a quadruped legged robot.
Robotics and medicine: A scientific rainbow in hospital.
Jeelani, S; Dany, A; Anand, B; Vandana, S; Maheswaran, T; Rajkumar, E
2015-08-01
The journey of robotics is a real wonder and can be considered a scientific rainbow showering surprising, priceless power in the era of future technologies. The seven technologies discussed in this paper are the da Vinci robotic surgical system and sperm sorters for infertility; Veebot for blood investigation; Hanako, the robotic dental patient, for simulating a dental patient and helping trainee dentists; the RP-7 robot, an around-the-clock physician connecting physician and patient; the Robot for Interactive Body Assistance (RIBA), which serves as a nurse; Bushbot, serving as a brilliant surgeon; and Virtibot, helping in virtual autopsy. Thus, robotics in medicine is a budding field contributing greatly to human life from before birth to afterlife, in seven forms gracefully portraying a scientific rainbow in the hospital environment. PMID:26538882
SOFT ROBOTICS. A 3D-printed, functionally graded soft robot powered by combustion.
Bartlett, Nicholas W; Tolley, Michael T; Overvelde, Johannes T B; Weaver, James C; Mosadegh, Bobak; Bertoldi, Katia; Whitesides, George M; Wood, Robert J
2015-07-10
Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping. Copyright © 2015, American Association for the Advancement of Science.
Dynamic inverse models in human-cyber-physical systems
NASA Astrophysics Data System (ADS)
Robinson, Ryan M.; Scobee, Dexter R. R.; Burden, Samuel A.; Sastry, S. Shankar
2016-05-01
Human interaction with the physical world is increasingly mediated by automation. This interaction is characterized by dynamic coupling between robotic (i.e. cyber) and neuromechanical (i.e. human) decision-making agents. Guaranteeing performance of such human-cyber-physical systems will require predictive mathematical models of this dynamic coupling. Toward this end, we propose a rapprochement between robotics and neuromechanics premised on the existence of internal forward and inverse models in the human agent. We hypothesize that, in tele-robotic applications of interest, a human operator learns to invert automation dynamics, directly translating from desired task to required control input. By formulating the model inversion problem in the context of a tracking task for a nonlinear control system in control-affine form, we derive criteria for exponential tracking and show that the resulting dynamic inverse model generally renders a portion of the physical system state (i.e., the internal dynamics) unobservable from the human operator's perspective. Under stability conditions, we show that the human can achieve exponential tracking without formulating an estimate of the system's state so long as they possess an accurate model of the system's dynamics. These theoretical results are illustrated using a planar quadrotor example. We then demonstrate that the automation can intervene to improve performance of the tracking task by solving an optimal control problem. Performance is guaranteed to improve under the assumption that the human learns and inverts the dynamic model of the altered system. We conclude with a discussion of practical limitations that may hinder exact dynamic model inversion.
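The core idea, that an agent with an accurate dynamic model can invert it to achieve exponential tracking, can be sketched for a toy control-affine system (a pendulum rather than the paper's planar quadrotor; the gains and parameters are illustrative assumptions):

    import numpy as np

    g_over_l = 9.81            # pendulum gravity/length term (illustrative)
    kp, kd = 25.0, 10.0        # gains giving stable error dynamics
    dt, theta, omega = 1e-3, 0.5, 0.0

    def desired(t):            # desired angle and its first two derivatives
        return np.sin(t), np.cos(t), -np.sin(t)

    for k in range(10000):     # simulate 10 s
        qd, qd_dot, qd_ddot = desired(k * dt)
        e, e_dot = qd - theta, qd_dot - omega
        # Dynamic inverse model: cancel the nonlinearity and impose
        # exponentially stable error dynamics e'' + kd e' + kp e = 0.
        u = qd_ddot + kd * e_dot + kp * e + g_over_l * np.sin(theta)
        omega += dt * (-g_over_l * np.sin(theta) + u)
        theta += dt * omega
    print("tracking error after 10 s:", abs(desired(10.0)[0] - theta))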
Concept and design philosophy of a person-accompanying robot
NASA Astrophysics Data System (ADS)
Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi
1999-01-01
This paper proposes a person-accompanying robot as a novel human-collaborative robot: a legged mobile robot that can follow a person using its vision. Toward the future aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, but conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; study of this human-observing function is crucial to realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a basis for the human-observing function of the prototype, we have realized face tracking utilizing skin-color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voice clearly and remotely by utilizing microphone arrays. Results of these preliminary studies suggest the feasibility of the proposed robot.
NASA Astrophysics Data System (ADS)
Lennon, Craig; Bodt, Barry; Childers, Marshal; Dean, Robert; Oh, Jean; DiBerardino, Chip; Keegan, Terence
2015-05-01
The Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates with which soldiers can work. This requires the integration of fundamental and applied research in perception, artificial intelligence, and human-robot interaction. In October of 2014, the RCTA assessed progress towards integrating this research. This assessment was designed to evaluate the robot's performance when it used new capabilities to perform selected aspects of a mission. The assessed capabilities included the ability of the robot to: navigate semantically outdoors with respect to structures and landmarks, identify doors in the facades of buildings, and identify and track persons emerging from those doors. We present details of the mission-based vignettes that constituted the assessment, and evaluations of the robot's performance in these vignettes.
Mobile robot navigation modulated by artificial emotions.
Lee-Johnson, C P; Carnegie, D A
2010-04-01
For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.
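As a loose sketch of the reactive mechanism described above, an appraisal computed from short-term sensor data can raise a "fear" level that modulates control parameters; the function names, thresholds, and mappings below are invented for illustration, not taken from the authors' architecture:

    def fear_appraisal(recent_min_distances, danger_radius=0.5):
        """Appraise short-term range data: fear rises as obstacles get close."""
        closest = min(recent_min_distances)
        return max(0.0, min(1.0, 1.0 - closest / danger_radius))

    def modulated_params(fear, base_speed=1.0, base_margin=0.2):
        """Fear lowers the speed limit and widens the obstacle safety margin."""
        return base_speed * (1.0 - 0.7 * fear), base_margin * (1.0 + 2.0 * fear)

    fear = fear_appraisal([0.4, 0.3, 0.6])     # metres to nearest obstacle
    speed_limit, safety_margin = modulated_params(fear)
    print(fear, speed_limit, safety_margin)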
Three dialogues concerning robots in elder care.
Metzler, Theodore A; Barnes, Susan J
2014-01-01
The three dialogues in this contribution concern 21st century application of life-like robots in the care of older adults. They depict conversations set in the near future, involving a philosopher (Dr Phonius) and a nurse (Dr Myloss) who manages care at a large facility for assisted living. In their first dialogue, the speakers discover that their quite different attitudes towards human-robot interaction parallel fundamental differences separating their respective concepts of consciousness. The second dialogue similarly uncovers deeply contrasting notions of personhood that appear to be associated with respective communities of nursing and robotics. The additional key awareness that arises in their final dialogue links applications of life-like robots in the care of older adults with potential transformations in our understandings of ourselves - indeed, in our understandings of the nature of our own humanity. This series of dialogues, therefore, appears to address a topic in nursing philosophy that merits our careful attention. © 2013 John Wiley & Sons Ltd.
Relative hardness measurement of soft objects by a new fiber optic sensor
NASA Astrophysics Data System (ADS)
Ahmadi, Roozbeh; Ashtaputre, Pranav; Abou Ziki, Jana; Dargahi, Javad; Packirisamy, Muthukumaran
2010-06-01
The measurement of the relative hardness of soft objects enables replication of the tactile perception capabilities of the human finger. This ability has many applications, not only in the automation and robotics industry but also in areas such as aerospace and robotic surgery, where a robotic tool interacts with a soft object. One practical example of interaction between a solid robotic instrument and a soft contact object occurs during robotically-assisted minimally invasive surgery: measuring the relative hardness of bio-tissue in contact with the robotic instrument helps surgeons perform this type of surgery more reliably. In the present work, a new optical sensor is proposed to measure the relative hardness of contact objects. To measure the hardness of a contact object, the sensor must, like a human finger, apply a small force/deformation to the object; the applied force and resulting deformation are then recorded at certain points to enable the relative hardness measurement. In this work, force/deformation data for a contact object are recorded at certain points by the proposed optical sensor, and the recorded data are used to measure the relative hardness of soft objects. Based on the proposed design, an experimental setup was developed and experimental tests were performed to measure the relative hardness of elastomeric materials. Experimental results verify the ability of the proposed optical sensor to measure the relative hardness of elastomeric samples.
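The relative-hardness estimate itself can be illustrated with a least-squares fit of recorded force-deformation pairs; the numbers below are invented, and the fitted slope (stiffness) serves as the relative hardness measure under that assumption:

    import numpy as np

    deformation_mm = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    force_n = np.array([0.00, 0.11, 0.19, 0.32, 0.41, 0.52])  # soft sample

    # Slope of the least-squares line: force per unit deformation (N/mm).
    stiffness = np.polyfit(deformation_mm, force_n, 1)[0]
    print("relative hardness (stiffness, N/mm):", stiffness)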
How long did it last? You would better ask a human
Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka
2014-01-01
In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694
Progress in EEG-Based Brain Robot Interaction Systems
Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe
2017-01-01
The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients in daily life. The key issue in a BRI system is identifying human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since controlling robot systems via brainwaves must account for surrounding-environment feedback in real time, robot mechanical kinematics and dynamics, and robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges for future BRI techniques. PMID:28484488
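A minimal, hedged sketch of the decoding chain the review covers (preprocessing, feature extraction, classification), using synthetic two-class trials whose 12 Hz power differs; the sampling rate, frequency band, and classifier are illustrative choices, not the review's prescription:

    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    fs, n_trials, n_samples = 128, 80, 256
    rng = np.random.default_rng(0)

    def make_trial(label):            # synthetic single-channel EEG trial
        t = np.arange(n_samples) / fs
        amp = 2.0 if label else 0.5   # the two classes differ in 12 Hz power
        return amp * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1, n_samples)

    def band_power(x, lo, hi):        # feature: mean Welch power in a band
        f, p = welch(x, fs=fs, nperseg=128)
        return p[(f >= lo) & (f <= hi)].mean()

    labels = rng.integers(0, 2, n_trials)
    X = np.array([[band_power(make_trial(y), 8, 15)] for y in labels])
    clf = LinearDiscriminantAnalysis().fit(X[:60], labels[:60])
    print("held-out accuracy:", clf.score(X[60:], labels[60:]))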
2015 Marine Corps Security Environment Forecast: Futures 2030-2045
2015-01-01
The technologies that make the iPhone "smart" were publicly funded—the Internet, wireless networks, the global positioning system, microelectronics... Energy Revolution (63 percent); Internet of Things (ubiquitous sensors embedded in interconnected computing devices) (50 percent); "Sci-Fi"... Neuroscience & artificial intelligence - Sensors/control systems - Power & energy - Human-robot interaction. Robots/autonomous systems will become part of the
Gui, Kai; Liu, Honghai; Zhang, Dingguo
2017-11-01
Robotic exoskeletons for physical rehabilitation have been utilized in recent years for retraining patients suffering from paraplegia and for enhancing motor recovery. However, in most systems users are not voluntarily involved. This paper aims to develop a locomotion trainer with multiple gait patterns that can be controlled by the user's active motion intention. A multimodal human-robot interaction (HRI) system, comprising cognitive HRI (cHRI) and physical HRI (pHRI), is established to enhance the subject's active participation during gait rehabilitation. The cHRI adopts a brain-computer interface based on steady-state visual evoked potentials. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower-joint trajectories, and its state variables are regulated by the cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI was tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively elicited to achieve various assistive gait patterns.
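A sketch of the trajectory-generation piece: a Hopf oscillator is a common central-pattern-generator building block whose amplitude and frequency parameters could play the role of the state variables regulated by the cHRI/pHRI inputs; the constants below are illustrative, not the authors' values:

    import numpy as np

    mu = 1.0                       # squared amplitude of the limit cycle
    omega = 2 * np.pi * 0.5        # 0.5 Hz gait rhythm
    dt, x, y = 1e-3, 0.1, 0.0
    joint_angle = []
    for _ in range(8000):          # 8 s of rhythmic output
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        joint_angle.append(x)      # x could drive one hip/knee trajectory
    print("steady-state amplitude ~", max(joint_angle[-2000:]))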
Alonso-Martín, Fernando; Gamboa-Montero, Juan José; Castillo, José Carlos; Castro-González, Álvaro; Salichs, Miguel Ángel
2017-05-16
An important aspect of Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective: because a single microphone is enough to cover each solid part of the robot, just a few microphones can cover the whole shell. It is also easy to install and configure, requiring only a contact surface to attach each microphone to the robot's shell before plugging it into the robot's computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures.
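The approach can be sketched as follows, with invented audio features and synthetic "events" in place of the recorded contact-microphone data, and a plain decision tree standing in for the Logistic Model Trees that performed best in the paper:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)

    def features(audio, fs=16000):
        energy = float(np.sqrt(np.mean(audio ** 2)))               # loudness
        zcr = float(np.mean(np.abs(np.diff(np.sign(audio))) > 0))  # roughness
        return [energy, zcr, audio.size / fs]                      # + duration

    def fake_event(kind):          # synthetic stand-in for a touch recording
        n = {"tap": 800, "slap": 1200, "stroke": 8000}[kind]
        amp = {"tap": 0.3, "slap": 1.0, "stroke": 0.1}[kind]
        return amp * rng.normal(0, 1, n)

    kinds = ["tap", "slap", "stroke"] * 30
    X = [features(fake_event(k)) for k in kinds]
    clf = DecisionTreeClassifier().fit(X[:60], kinds[:60])
    print("accuracy:", clf.score(X[60:], kinds[60:]))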
Face Generation Using Emotional Regions for Sensibility Robot
NASA Astrophysics Data System (ADS)
Gotoh, Minori; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori
We think that psychological interaction is necessary for smooth communication between robots and people. One way to psychologically interact with others is through facial expressions. Facial expressions are very important for communication because they show true emotions and feelings. The "Ifbot" robot communicates with people by considering its own "emotions". Ifbot has many facial expressions to communicate enjoyment. We developed a method for generating facial expressions based on human subjective judgements, mapping Ifbot's facial expressions to its emotions. We first created Ifbot's emotional space to map its facial expressions, applying a five-layer auto-associative neural network to the space. We then subjectively evaluated the emotional space and created emotional regions based on the results. We generated emotive facial expressions using the emotional regions.
Real-time human-robot interaction underlying neurorobotic trust and intent recognition.
Bray, Laurence C Jayet; Anumandla, Sridhar R; Thibeault, Corey M; Hoang, Roger V; Goodman, Philip H; Dascalu, Sergiu M; Bryant, Bobby D; Harris, Frederick C
2012-08-01
In the past three decades, interest in trust has grown significantly due to its important role in modern society. Everyday social experience involves "confidence" among people, which can be studied at the neurological level in the human brain. Recent studies suggest that oxytocin is a centrally-acting neurotransmitter important in the development and alteration of trust. Its administration in humans seems to increase trust and reduce fear, in part by directly inhibiting the amygdala. However, the cerebral microcircuitry underlying this mechanism is still unknown. We propose the first biologically realistic model of trust, simulating spiking cortical neurons in a real-time human-robot interaction simulation. At the physiological level, oxytocin cells were modeled with the triple apical dendrites characteristic of their structure in the paraventricular nucleus of the hypothalamus. As trust was established in the simulation, this architecture had a direct inhibitory effect on tonic firing in the amygdala, which resulted in a willingness to exchange an object from the trustor (a virtual neurorobot) to the trustee (a human actor). Our software and hardware enhancements allowed the simulation of almost 100,000 neurons in real time and the incorporation of a sophisticated Gabor mechanism as a visual filter. The simulated brain was functional, and the robotic system was robust in that it trusted or distrusted a human actor based on movement imitation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A
2016-05-19
The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features, such as fingers whose two last joints are mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger with its two last joints coupled, which learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding and sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution, or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to the development of neuro-mimetic controllers for dexterous humanoid robotic and prosthetic upper extremities, and has the potential to promote human-robot interactions.
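The population vector principle the model mimics can be sketched directly: each unit has a preferred direction with cosine tuning, and the direction is read out as the firing-rate-weighted sum of preferred directions. The numbers below are illustrative, and this sketch covers only the coding principle, not the paper's full learning architecture:

    import numpy as np

    rng = np.random.default_rng(3)
    angles = rng.uniform(0, 2 * np.pi, 100)            # preferred directions
    preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    target = np.array([np.cos(0.8), np.sin(0.8)])      # true movement direction
    rates = np.clip(preferred @ target, 0, None)       # rectified cosine tuning
    pop_vector = (rates[:, None] * preferred).sum(axis=0)
    print("decoded:", np.arctan2(pop_vector[1], pop_vector[0]), "target: 0.8")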
Robust mobility in human-populated environments
NASA Astrophysics Data System (ADS)
Gonzalez, Juan Pablo; Phillips, Mike; Neuman, Brad; Likhachev, Max
2012-06-01
Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments. Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning time-minimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, the generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary. We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths. Experiments in simulated environments show the performance of the algorithms under different controlled conditions, and experiments on physical mobile robots interacting with humans show how the algorithms perform under the uncertainties of the real world.
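The bookkeeping at the heart of SIPP can be sketched in a few lines: the times at which a cell is blocked by predicted obstacle trajectories are converted into the safe intervals during which the cell may be occupied. The interval format and planning horizon here are illustrative assumptions:

    def safe_intervals(blocked, horizon=100.0):
        """blocked: sorted, disjoint (start, end) times the cell is occupied."""
        intervals, t = [], 0.0
        for start, end in blocked:
            if start > t:                  # gap before this obstacle: safe
                intervals.append((t, start))
            t = max(t, end)
        if t < horizon:
            intervals.append((t, horizon))
        return intervals

    print(safe_intervals([(2.0, 3.5), (7.0, 8.0)]))
    # [(0.0, 2.0), (3.5, 7.0), (8.0, 100.0)]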
Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)
NASA Technical Reports Server (NTRS)
Savely, Robert T. (Editor)
1991-01-01
The papers from the symposium are presented. Emphasis is placed on human factors engineering and space environment interactions. The technical areas covered in the human factors section include: satellite monitoring and control, man-computer interfaces, expert systems, AI/robotics interfaces, crew system dynamics, and display devices. The space environment interactions section presents the following topics: space plasma interaction, spacecraft contamination, space debris, and atomic oxygen interaction with materials. Some of the above topics are discussed in relation to the space station and space shuttle.
Anticipatory detection of turning in humans for intuitive control of robotic mobility assistance.
Farkhatdinov, Ildar; Roehri, Nicolas; Burdet, Etienne
2017-09-26
Many wearable lower-limb robots for walking assistance have been developed in recent years. However, it remains unclear how they can be commanded by their user in an intuitive and efficient way. In particular, providing robotic turning assistance to neurologically impaired individuals remains a significant challenge. The control should be safe for users and their environment, yet yield sufficient performance and enable natural human-machine interaction. Here, we propose using anticipatory head and trunk behaviour to detect the intention to turn in a natural, non-intrusive way, and to use it for triggering turning movements in a robot for walking assistance. We therefore study head and trunk orientation during locomotion in healthy adults, and investigate upper-body anticipatory behaviour during turning. The collected walking and turning kinematics data are clustered using the k-means algorithm, and cross-validation tests with the k-nearest neighbours method are used to evaluate the performance of turning detection during locomotion. Tests with seven subjects exhibited accurate turning detection. The head anticipated turning by more than 400-500 ms on average across all subjects. Overall, the proposed method detected turning 300 ms after its initiation and 1230 ms before the turning movement was completed. Using head anticipatory behaviour enabled turning to be detected about 100 ms faster than detection based only on pelvis orientation measurements. Finally, it was demonstrated that the proposed turning detection can improve the quality of human-robot interaction by improving control accuracy and transparency.
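A hedged sketch of that pipeline on synthetic data: a head-minus-pelvis yaw feature anticipates the turn, k-means exposes the structure of the data, and a k-NN classifier detects turning. The feature values, cluster count, and neighbour count below are invented stand-ins for the paper's measured kinematics:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    # Features: [head-pelvis yaw offset (rad), head yaw rate (rad/s)].
    straight = rng.normal([0.0, 0.0], 0.05, (200, 2))
    turning = rng.normal([0.4, 0.3], 0.10, (200, 2))   # head leads the turn
    X = np.vstack([straight, turning])
    y = np.array([0] * 200 + [1] * 200)

    print("centers:", KMeans(n_clusters=2, n_init=10).fit(X).cluster_centers_)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
    print("turn-detection accuracy:", clf.score(X[1::2], y[1::2]))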
Energetic Passivity of the Human Ankle Joint.
Lee, Hyunglae; Hogan, Neville
2016-12-01
Understanding the passive or nonpassive behavior of the neuromuscular system is important for designing and controlling robots that physically interact with humans, since it provides quantitative information needed to secure coupled stability while maximizing performance. This has become more important than ever with the increasing demand for robotic technologies in neurorehabilitation. This paper presents a quantitative characterization of the passive and nonpassive behavior of the ankle in young healthy subjects, which provides a baseline for future studies of persons with neurological impairments and information for future development of rehabilitation robots, such as exoskeletal devices and powered prostheses. Measurements using a wearable ankle robot actuating two degrees of freedom of the ankle, combined with curl analysis and passivity analysis, enabled characterization of both quasi-static and steady-state dynamic behavior of the ankle, unavailable from single-DOF studies. Despite active neuromuscular control over a wide range of muscle activation, passive or dissipative ankle behavior predominated in young healthy subjects.
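The passivity criterion used in such analyses can be sketched numerically: a mechanical port is passive if the energy it absorbs, the time integral of force times velocity, stays bounded below. The spring-damper "ankle" and its parameters below are synthetic illustrations, not the paper's measured impedances:

    import numpy as np

    dt = 1e-3
    t = np.arange(0, 5, dt)
    x = 0.05 * np.sin(2 * np.pi * t)      # imposed displacement (m)
    v = np.gradient(x, dt)                # velocity (m/s)
    k, b = 300.0, 5.0                     # stiffness (N/m), damping (N s/m)
    f = k * x + b * v                     # spring-damper restoring force
    energy = np.cumsum(f * v) * dt        # energy absorbed by the "joint"
    print("passive (energy bounded below):", energy.min() >= -1e-9)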
2006-01-01
segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive... multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
The design of mobile robot control system for the aged and the disabled
NASA Astrophysics Data System (ADS)
Qiang, Wang; Lei, Shi; Xiang, Gao; Jin, Zhang
2017-01-01
This paper presents the design of a mobile robot control system for the aged and the disabled, which consists of two main parts: a human-computer interaction module and a drive control module. Data is transferred between the two parts via a universal asynchronous receiver/transmitter (UART). In the former, the speed and direction commands for the mobile robot are obtained from a Hall-effect joystick. In the latter, an electronic differential algorithm is developed to implement the robot's mobility by driving the two wheel motors. To improve ride comfort when speed or direction changes, a least-squares algorithm is used to optimize the speed characteristic curves of the two motors. Experimental results have verified the effectiveness of the designed system.
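The electronic differential itself reduces to differential-drive kinematics: the joystick's speed and turn-rate commands are mapped to individual wheel speeds so the inner wheel slows in a turn. The track width below is an assumed value, and the paper's least-squares smoothing of the speed curves is omitted from this sketch:

    TRACK = 0.55    # distance between the two drive wheels (m), assumed

    def wheel_speeds(v, omega):
        """v: forward speed (m/s); omega: turn rate (rad/s, positive = left)."""
        v_left = v - omega * TRACK / 2.0
        v_right = v + omega * TRACK / 2.0
        return v_left, v_right

    print(wheel_speeds(1.0, 0.5))   # left wheel slower in a left turn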
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human-sized, with a three-degree-of-freedom (DOF) articulated waist and two seven-DOF arms, giving it an impressive workspace for interacting with its environment. Its two five-fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
Collaborative Robots and Knowledge Management - A Short Review
NASA Astrophysics Data System (ADS)
Mușat, Flaviu-Constantin; Mihu, Florin-Constantin
2017-12-01
Because customer requirements regarding quality, quantity, and delivery times at the lowest possible cost keep rising, industry has had to develop automated solutions to meet them. Starting from the automated lines developed by Ford and Toyota, we now have automated, self-sustaining working lines, made possible by collaborative robots. By using knowledge management systems, we can improve the future development of this area of research. This paper shows the benefits of the smart use of robots performing manipulation activities, which improves workplace ergonomics and human-machine interaction by assisting with parallel tasks and lowering physical human effort.
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
Datteri, Edoardo
2013-03-01
This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (exemplified here by harms that occurred in surgical interventions supported by the Da Vinci robot, as reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are nothing but well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities: for this reason they do not raise particularly novel ethical issues. In contrast, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point will be illustrated in connection with the rehabilitation robot Lokomat.
Multi-layer robot skin with embedded sensors and muscles
NASA Astrophysics Data System (ADS)
Tomar, Ankit; Tadesse, Yonas
2016-04-01
Soft artificial skin with embedded sensors and actuators is proposed for a crosscutting study of cognitive science on a facially expressive humanoid platform. This paper focuses on artificial muscles suitable for humanoid robots and prosthetic devices for safe human-robot interactions. A novel composite artificial skin consisting of sensors and twisted polymer actuators is proposed. The artificial skin is conformable to intricate geometries and includes protective layers, sensor layers, and actuation layers. Fluidic channels are included in the elastomeric skin so that fluids can be injected to control actuator response time. The skin can be used to develop facially expressive humanoid robots or other soft robots. Such a humanoid platform can be used by computer scientists and behavioral science researchers to test various algorithms and to better understand and develop humanoid robots with facial expression capability. Small-scale humanoid robots can also assist ongoing therapeutic treatment research with autistic children. The multilayer skin can be used in many soft robots, enabling them to detect both temperature and pressure while actuating the entire structure.
NASA Astrophysics Data System (ADS)
Handford, Matthew L.; Srinivasan, Manoj
2016-02-01
Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.
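The Pareto front described above comes from a weighted-sum trade-off between two costs. As an illustration of that scalarization step only (the paper's full gait optimization is far larger), a sketch with invented convex surrogate costs:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the two objectives; in the paper these come from a
# full human-prosthesis gait optimization.
def human_metabolic_cost(x):
    step_asym, prosthesis_power = x
    return (step_asym - 0.2) ** 2 + 1.0 / (0.1 + prosthesis_power)

def prosthesis_cost(x):
    return x[1] ** 2  # energy cost grows with actuation power

# Trace a Pareto front by sweeping the weight between the two costs.
front = []
for w in np.linspace(0.05, 0.95, 10):
    res = minimize(lambda x: (1 - w) * human_metabolic_cost(x) + w * prosthesis_cost(x),
                   x0=[0.0, 1.0], bounds=[(-0.5, 0.5), (0.0, 5.0)])
    front.append((human_metabolic_cost(res.x), prosthesis_cost(res.x)))
```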
Instrumented Compliant Wrist with Proximity and Contact Sensing for Close Robot Interaction Control.
Laferrière, Pascal; Payeur, Pierre
2017-06-14
Compliance has been exploited in various forms in robotic systems to allow rigid mechanisms to come into contact with fragile objects, or with complex shapes that cannot be accurately modeled. Force feedback control has been the classical approach for providing compliance in robotic systems. However, by integrating other forms of instrumentation with compliance into a single device, it is possible to extend close monitoring of nearby objects before and after contact occurs. As a result, safer and smoother robot control can be achieved both while approaching and while touching surfaces. This paper presents the design and extensive experimental evaluation of a versatile, lightweight, and low-cost instrumented compliant wrist mechanism which can be mounted on any rigid robotic manipulator in order to introduce a layer of compliance while providing the controller with extra sensing signals during close interaction with an object's surface. Arrays of embedded range sensors provide real-time measurements on the position and orientation of surfaces, either located in proximity or in contact with the robot's end-effector, which permits close guidance of its operation. Calibration procedures are formulated to overcome inter-sensor variability and achieve the highest available resolution. A versatile solution is created by embedding all signal processing, while wireless transmission connects the device to any industrial robot's controller to support path control. Experimental work demonstrates the device's physical compliance as well as the stability and accuracy of the device outputs. Primary applications of the proposed instrumented compliant wrist include smooth surface following in manufacturing, inspection, and safe human-robot interaction.
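The calibration step named above, a per-sensor least-squares mapping from raw counts to distance so that inter-sensor variability is absorbed into individual gain/offset pairs, can be sketched as follows (the bench data are invented):

```python
import numpy as np

# Hypothetical bench data: raw readings from one range sensor at known distances.
known_mm = np.array([10, 20, 30, 40, 50, 60], dtype=float)
raw      = np.array([812, 655, 540, 433, 322, 210], dtype=float)

# Each sensor gets its own (gain, offset) via linear least squares,
# mapping raw counts to millimetres.
A = np.vstack([raw, np.ones_like(raw)]).T
gain, offset = np.linalg.lstsq(A, known_mm, rcond=None)[0]

def to_mm(raw_count, gain=gain, offset=offset):
    """Convert a raw proximity reading to a calibrated distance."""
    return gain * raw_count + offset
```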
Experiences with a Barista Robot, FusionBot
NASA Astrophysics Data System (ADS)
Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou
In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching, and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation, and a smart device gateway. In experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute toward the long-term goal of intelligent service robotics in smart home environments.
Manipulability, force, and compliance analysis for planar continuum manipulators
NASA Technical Reports Server (NTRS)
Gravagne, Ian A.; Walker, Ian D.
2002-01-01
Continuum manipulators, inspired by the natural capabilities of elephant trunks and octopus tentacles, may find niche applications in areas like human-robot interaction, multiarm manipulation, and unknown environment exploration. However, their true capabilities will remain largely inaccessible without proper analytical tools to evaluate their unique properties. Ellipsoids have long served as one of the foremost analytical tools available to the robotics researcher, and the purpose of this paper is to first formulate, and then to examine, three types of ellipsoids for continuum robots: manipulability, force, and compliance.
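For a conventional rigid-link arm, the manipulability ellipsoid that this paper generalizes to continuum robots comes directly from the Jacobian's singular value decomposition. A short sketch for a planar two-link arm (unit link lengths assumed for illustration; the paper's continuum formulation is not attempted here):

```python
import numpy as np

def manipulability_ellipsoid(J):
    """Axes and semi-axis lengths of the velocity manipulability ellipsoid."""
    U, s, _ = np.linalg.svd(J)
    return U, s  # columns of U: axis directions; s: semi-axis lengths

def manipulability_measure(J):
    return np.sqrt(np.linalg.det(J @ J.T))  # Yoshikawa's w = sqrt(det(J J^T))

# Planar 2-link example at joint angles (q1, q2), link lengths l1 = l2 = 1:
l1 = l2 = 1.0
q1, q2 = 0.3, 0.9
J = np.array([[-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
              [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)]])
axes, radii = manipulability_ellipsoid(J)
print(manipulability_measure(J))
```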
Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study
Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna
2018-01-01
Background: Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. Objective: The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. Methods: NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Results: Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Conclusions: Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. PMID:29724701
Affect in Human-Robot Interaction
2014-01-01
...is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978)... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of... robot's behavior through decision-making, learning, or action selection, a number of researchers used the fuzzy logic approach to emotion generation...
Motion and Emotional Behavior Design for Pet Robot Dog
NASA Astrophysics Data System (ADS)
Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang
A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform some emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit 8051 processor, and an embedded soft-core NIOS processor) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish walking motions. A behavior system is built on the implemented pet robot so that it is able to choose a suitable behavior for different environmental situations. Practical tests show that the implemented pet robot dog can engage in some emotional interaction with humans.
Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration
NASA Technical Reports Server (NTRS)
Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin
2011-01-01
The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and operating environments of both the human and the robotic agent, then effective human-robot coordination cannot be achieved.
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of people interacting with each other, not only human-machine interaction. The primary concern is not how people can interact with computers, but how shall we design computers to help people work together? An analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse.
Cortellessa, Gabriella; Fracasso, Francesca; Sorrentino, Alessandra; Orlandini, Andrea; Bernardi, Giulio; Coraci, Luca; De Benedictis, Riccardo; Cesta, Amedeo
2018-02-01
This article describes an enhanced telepresence robot named ROBIN, part of a telecare system derived from the GIRAFFPLUS project for supporting and monitoring older adults at home. ROBIN is integrated in a sensor-rich environment that aims to continuously monitor the physical and psychological wellbeing of older persons living alone. Caregivers (formal and informal) can communicate with their assisted persons through it. Long-term trials in real houses highlighted several user requirements that inspired improvements to the robotic platform. The enhanced telepresence robot was assessed by users to test its suitability to support social interaction and provide motivational feedback on health-related aspects. Twenty-five users (n = 25) assessed the new multimodal interaction capabilities and new communication services. A psychophysiological approach was adopted to investigate aspects like engagement, usability, and affective impact, as well as the possible role of individual differences in the quality of human-robot interaction. ROBIN was overall judged usable, the interaction with and through it proved pleasant, and the required workload was limited, supporting the idea of using it as a central component for remote assistance and social participation. Open-minded users tended to have a more positive interaction with it. This work describes an enabling technology for remote assistance and social communication. It highlights the importance of being compliant with users' needs in order to develop solutions that are easy to use and able to foster social connections. The role of personality appeared to be relevant for the interaction, underscoring a clear role for service personalization.
Wu, Ya-Huei; Wrobel, Jérémy; Cornuet, Mélanie; Kerhervé, Hélène; Damnée, Souad; Rigaud, Anne-Sophie
2014-01-01
There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot-acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot-acceptance, including older adults' uneasiness with technology, feeling of stigmatization, and ethical/societal issues associated with robot use. It is important to destigmatize images of assistive robots to facilitate their acceptance. Universal design aiming to increase the market for and production of products that are usable by everyone (to the greatest extent possible) might help to destigmatize assistive devices.
NASA Astrophysics Data System (ADS)
Colla, Valentina; Schroeder, Antonius; Buzzelli, Andrea; Abbà, Dario; Faes, Andrea; Romaniello, Lea
2018-05-01
The introduction of new technologies that can support and empower human capabilities in a number of professional tasks, while reducing the need for cumbersome operations and the exposure to risk and occupational disease, is nowadays perceived as a must in any industrial field, the process industry included. However, despite their relevant potential, new technologies are not always easy to introduce into the professional environment. A design procedure that takes into account workers' acceptance, needs, and capabilities, together with continuing education and training of the personnel who must exploit the innovation, is as fundamental as technical reliability for the successful introduction of any new technology. An exemplary case is provided by symbiotic human-robot cooperation. In the steel sector, the difficulties of implementing symbiotic human-robot cooperation are greater than in the manufacturing sector, due to environmental conditions that in some cases are not favorable to robots. On the other hand, the opportunities and potential advantages are also greater, as robots could replace human operators in repetitive, heavy tasks, improving workers' health and safety. The present paper provides an example of the potential and opportunities of human-robot interaction and discusses how this approach can be included in a social innovation paradigm. Moreover, an example is provided of an ongoing project funded by the Research Fund for Coal and Steel, "ROBOHARSH", which aims at applying this approach in the steel industry to a particularly delicate task, i.e., the replacement of the refractory components of the ladle sliding gate.
Interpretation and Manipulation in Human Plans. Technical Report No. 317.
ERIC Educational Resources Information Center
Newman, Denis; Bruce, Bertram
Analysis of students' interpretations of a complex episode of social interaction was used to illustrate three features of human plans that distinguish them from robot plans and that form a basis for a theory of the development of social action. The features illustrated are that (1) human plans are social, (2) human plans operate on…
On the applicability of brain reading for predictive human-machine interfaces in robotics.
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single-trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG, like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single-trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
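As a rough illustration of the single-trial detection step (not the authors' pipeline), a shrinkage-regularized LDA, a common choice for P300-style classification, can be cross-validated on windowed EEG features; the data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for windowed, band-pass-filtered single-trial EEG:
# n_trials x (channels * time samples); label 1 = target (P300-like), 0 = standard.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 20))
y = rng.integers(0, 2, size=200)
X[y == 1, :64] += 0.5          # inject a weak "P300-like" pattern for the demo

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)   # single-trial detection accuracy
print(scores.mean())
```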
Environmental interactions in space exploration: Environmental interactions working group
NASA Technical Reports Server (NTRS)
Kolecki, Joseph C.; Hillard, G. Barry
1992-01-01
With the advent of the Space Exploration Initiative, the possibility of designing and using systems on scales heretofore unattempted presents exciting new challenges in systems design and space science. The environments addressed by the Space Exploration Initiative include the surfaces of the Moon and Mars, as well as the varied plasma and field environments which will be encountered by humans and cargo enroute to these destinations. Systems designers will need to understand environmental interactions and be able to model these mechanisms from the earliest conceptual design stages through design completion. To the end of understanding environmental interactions and establishing robotic precursor mission requirements, an Environmental Interactions Working Group was established as part of the Robotic Missions Working Group. The working group is described, and its current activities are updated.
Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots
NASA Technical Reports Server (NTRS)
Chen, Vincent Wei-Kang
1992-01-01
Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team; but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances to the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced, wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation. The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot that is capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task-description commands to the robot, and to monitor robot activities as it then carried out each assignment autonomously.
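The dissertation's controller handles endpoint commands on a multi-manipulator free-flyer; as a much smaller illustration of the underlying idea only, a Slotine-Li-style adaptive law for a one-DOF plant with unknown mass (all gains and values are made up):

```python
import numpy as np

# Minimal 1-DOF adaptive tracking controller: the parameter estimate
# m_hat adapts online while the composite error s is driven to zero.
m_true, m_hat = 2.0, 0.5          # unknown true mass, initial estimate
lam, kd, gamma, dt = 2.0, 4.0, 0.8, 0.001
q = qd = 0.0
for k in range(20000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                     # composite tracking error
    qr_dd = qdd_des - lam * ed           # reference acceleration
    tau = m_hat * qr_dd - kd * s         # control law
    m_hat += -gamma * qr_dd * s * dt     # parameter adaptation law
    qdd = tau / m_true                   # true plant responds
    qd += qdd * dt
    q += qd * dt
print(m_hat)   # estimate converges toward the true mass while tracking
```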
Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain
2015-01-01
In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km/h on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force), and thus a less active participation during locomotion, should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex, which is known to be crucial for motor learning.
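A minimal sketch of the spectral measure involved, band power in the mu and beta ranges from Welch's PSD estimate (sampling rate and data below are placeholders, not the study's recordings):

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed EEG sampling rate (Hz)
eeg = np.random.randn(int(60 * fs))          # stand-in for one sensorimotor channel

f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))

def band_power(f, pxx, lo, hi):
    sel = (f >= lo) & (f <= hi)
    return np.trapz(pxx[sel], f[sel])

mu   = band_power(f, pxx, 8, 12)    # mu rhythm power
beta = band_power(f, pxx, 13, 30)   # beta rhythm power
# Comparing such band powers across conditions (e.g., 30% vs. 100%
# guidance force) indicates relative sensorimotor suppression/involvement.
```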
Hiolle, Antoine; Lewis, Matthew; Cañamero, Lola
2014-01-01
In the context of our work in developmental robotics regarding robot–human caregiver interactions, in this paper we investigate how a “baby” robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a “caregiver” to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two “idealized” robot profiles—a “needy” and an “independent” robot—in terms of their use of a caregiver as a means to regulate the “stress” (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness—“responsive” and “non-responsive”—to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the “needy” and “independent” axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot. PMID:24860492
Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction
NASA Astrophysics Data System (ADS)
Sweet, Nicholas
Autonomous systems, particularly unmanned aerial systems (UAS), remain limited in autonomous capabilities largely due to a poor understanding of their environment. Current sensors simply do not match human perceptive capabilities, impeding progress towards full autonomy. Recent work has shown the value of humans as sources of information within a human-robot team; in target applications, communicating human-generated 'soft data' to autonomous systems enables higher levels of autonomy through large, efficient information gains. This requires development of a 'human sensor model' that allows soft data fusion through Bayesian inference to update the probabilistic belief representations maintained by autonomous systems. Current human sensor models that capture linguistic inputs as semantic information are limited in their ability to generalize likelihood functions for semantic statements: they must be learned from dense data; they do not exploit the contextual information embedded within groundings; and they often limit human input to restrictive and simplistic interfaces. This work provides mechanisms to synthesize human sensor models from constraints based on easily attainable a priori knowledge, develops compression techniques to capture information-dense semantics, and investigates the problem of capturing and fusing semantic information contained within unstructured natural language. A robotic experimental testbed is also developed to validate the above contributions.
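The core fusion step can be sketched compactly: a grid belief over target position is multiplied by a semantic likelihood and renormalized. Everything below (the softmax-style "near" model, the anchor location, the sharpness constant) is an invented illustration, not the thesis's synthesized models:

```python
import numpy as np

# Grid belief over a 10 m x 10 m area; a simple "human sensor model"
# scores each cell against the statement "the target is near the building".
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), axis=-1)
belief = np.full((50, 50), 1.0 / 2500)           # uniform prior
building = np.array([7.0, 3.0])                  # hypothetical landmark

def likelihood_near(cells, anchor, sharpness=1.5):
    d = np.linalg.norm(cells - anchor, axis=-1)
    return np.exp(-sharpness * d)                # closer cells more consistent

belief *= likelihood_near(grid, building)        # Bayes: prior x likelihood
belief /= belief.sum()                           # normalized posterior
```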
2006-10-31
Articles: Danks, D., "Psychological Theories of Categorization as Probabilistic Graphical Models," Journal of Mathematical Psychology, submitted. Kyburg... and when there is no set of competent and authorized humans available to make the decisions themselves. Ultimately, it is a matter of expected utility...
Design and evaluation of a trilateral shared-control architecture for teleoperated training robots.
Shamaei, Kamran; Kim, Lawrence H; Okamura, Allison M
2015-08-01
Multilateral teleoperated robots can be used to train humans to perform complex tasks that require collaborative interaction and expert supervision, such as laparoscopic surgical procedures. In this paper, we explain the design and performance evaluation of a shared-control architecture that can be used in trilateral teleoperated training robots. The architecture includes dominance and observation factors inspired by the determinants of motor learning in humans, including observational practice, focus of attention, feedback and augmented feedback, and self-controlled practice. Toward the validation of such an architecture, we (1) verify the stability of a trilateral system by applying Llewellyn's criterion to a two-port equivalent architecture, and (2) demonstrate that system transparency remains generally invariant across relevant observation factors and movement frequencies. In a preliminary experimental study, a dyad of two human users (one novice, one expert) collaborated on the control of a robot to follow a trajectory. The experiment showed that the framework can be used to modulate the efforts of the users and adjust the source and level of haptic feedback to the novice user.
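Llewellyn's absolute stability criterion can be checked numerically once the two-port's immittance parameters are sampled over frequency. A sketch of that check, with an invented parameter sweep rather than the paper's system:

```python
import numpy as np

def llewellyn_stable(p11, p12, p21, p22):
    """Check Llewellyn's absolute stability conditions for a two-port
    given frequency-sampled immittance parameters (complex arrays):
    Re(p11) >= 0, Re(p22) >= 0, and
    2 Re(p11) Re(p22) >= |p12 p21| + Re(p12 p21) at every frequency."""
    c1 = np.real(p11) >= 0
    c2 = np.real(p22) >= 0
    c3 = 2 * np.real(p11) * np.real(p22) - np.real(p12 * p21) - np.abs(p12 * p21) >= 0
    return bool(np.all(c1 & c2 & c3))

# Illustrative frequency sweep with made-up parameters:
w = np.logspace(-1, 2, 400)
p11 = 1.0 + 0.1j * w
p22 = 0.5 + 0.05j * w
p12 = -np.ones_like(w, dtype=complex)
p21 = np.ones_like(w, dtype=complex)
print(llewellyn_stable(p11, p12, p21, p22))
```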
Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz
2016-07-01
Scene supervision is a major tool for making medical robots safer and more intuitive. This paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration and object detection as well as people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into the modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.
Tele-rehabilitation using in-house wearable ankle rehabilitation robot.
Jamwal, Prashant K; Hussain, Shahid; Mir-Nasiri, Nazim; Ghayesh, Mergen H; Xie, Sheng Q
2018-01-01
This article explores the wide-ranging potential of a wearable ankle robot for in-house rehabilitation. The presented robot has been conceptualized following a brief analysis of existing technologies, systems, and solutions for in-house physical ankle rehabilitation. Configuration design analysis and component selection for the ankle robot are discussed as part of the conceptual design. The complexities of human-robot interaction are closely encountered while maneuvering a rehabilitation robot. We present a fuzzy logic-based controller to perform the required robot-assisted ankle rehabilitation treatment. Designs of visual haptic interfaces are also discussed, which make the treatment engaging and motivate the subject to exert more effort and regain lost functions more rapidly. The complex nature of web-based communication between the user and remotely located physiotherapy staff is also discussed. A high-level software architecture appended to the robot ensures user-friendly operation. This software is made up of three important components: a patient-related database, a graphical user interface (GUI), and a library of virtual-reality exercises specifically developed for ankle rehabilitation.
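The article's tuned fuzzy controller is not given in the abstract; as an illustration of the general technique only, a toy Mamdani-style rule base with triangular membership functions, mapping a (positive) tracking error to an assist torque, with all breakpoints and output torques invented:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_assist_torque(angle_err):
    """Toy rule base: the larger the tracking error, the larger the assist.
    Handles positive errors only; a real controller would be symmetric."""
    mu_small = tri(angle_err, -0.05, 0.0, 0.05)
    mu_med   = tri(angle_err,  0.02, 0.10, 0.20)
    mu_large = tri(angle_err,  0.15, 0.30, 0.60)
    torques  = np.array([0.0, 2.0, 6.0])            # rule outputs, N*m
    weights  = np.array([mu_small, mu_med, mu_large])
    return float((weights * torques).sum() / (weights.sum() + 1e-9))

print(fuzzy_assist_torque(0.12))   # assist torque for a 0.12 rad error
```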
Ao, Di; Song, Rong; Gao, JinWu
2017-08-01
Although the merits of electromyography (EMG)-based control of powered assistive systems have been demonstrated, the factors that affect the performance of EMG-based human-robot cooperation, though very important, have received little attention. This study investigates whether a more physiologically appropriate model could improve the performance of human-robot cooperation control for an ankle power-assist exoskeleton robot. To achieve this goal, an EMG-driven Hill-type neuromusculoskeletal model (HNM) and a linear proportional model (LPM) were developed and calibrated through maximum isometric voluntary dorsiflexion (MIVD). The two control models could estimate the real-time ankle joint torque, and the HNM is more accurate and can account for changes in joint angle and muscle dynamics. Eight healthy volunteers were recruited to wear the ankle exoskeleton robot and complete a series of sinusoidal tracking tasks in the vertical plane. With varying levels of assistance based on the two calibrated models, the subjects were instructed to track the target displayed on the screen as accurately as possible by performing ankle dorsiflexion and plantarflexion. Two measurements, the root mean square error (RMSE) and root mean square jerk (RMSJ), were derived from the assistive torque and kinematic signals to characterize movement performance, whereas the amplitudes of the recorded EMG signals from the tibialis anterior (TA) and the gastrocnemius (GAS) were obtained to reflect muscular effort. The results demonstrated that muscular effort and the smoothness of tracking movements decreased with an increase in the assistance ratio. Compared with the LPM, subjects made lower physical efforts and generated smoother movements when using the HNM, which implies that a more physiologically appropriate model can enable more natural and human-like human-robot cooperation, and has potential value for improving human-exoskeleton interaction in future applications.
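To make the contrast between the two model families concrete, here is a heavily simplified sketch: first-order activation dynamics driving a Hill-type torque (activation times textbook force-length and force-velocity curves times a moment arm), next to the proportional baseline. All constants are placeholders, not the calibrated HNM or LPM:

```python
import numpy as np

def activation(emg_env, a_prev, dt=0.005, tau_act=0.015, tau_deact=0.060):
    """First-order activation dynamics driven by a rectified, low-pass
    filtered EMG envelope in [0, 1] (a common simplification)."""
    tau = tau_act if emg_env > a_prev else tau_deact
    return a_prev + dt * (emg_env - a_prev) / tau

def hill_torque(a, l_norm, v_norm, f_max=1200.0, moment_arm=0.04):
    """Hill-type torque: activation x force-length x force-velocity x arm.
    Gaussian f-l and hyperbolic f-v shapes are textbook forms."""
    fl = np.exp(-((l_norm - 1.0) ** 2) / 0.45)
    fv = (1.0 - v_norm) / (1.0 + 4.0 * v_norm) if v_norm >= 0 else 1.3
    return f_max * a * fl * fv * moment_arm

def linear_torque(emg_env, gain=45.0):
    return gain * emg_env        # the LPM idea: torque proportional to EMG
```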
Assistive acting movement therapy devices with pneumatic rotary-type soft actuators.
Wilkening, André; Baiden, David; Ivlev, Oleg
2012-12-01
Inherent compliance and assistive behavior are assumed to be essential properties for safe human-robot interaction. Rehabilitation robots demand the highest standards in this respect because the machine interacts directly with weak persons who are often sensitive to pain. Using novel soft fluidic actuators with rotary elastic chambers (REC actuators), compact, lightweight, and cost-effective therapeutic devices can be developed. This article describes modular design and control strategies for new assistive robotic devices for upper and lower extremities. Due to the inherent compliance and natural back-drivability of pneumatic REC actuators, these movement therapy devices provide gentle treatment, whereby the interaction forces between human and therapy device are estimated without the use of expensive force/torque sensors. Active model-based gravity compensation, based on separate models of the robot and of the individual patient's extremity, provides the basis for effective assistive control. The use of pneumatic actuators demands a special safety concept, which is merged with the control algorithms to provide a sufficient level of safety and to catch any possible system errors and/or emergency situations. A self-explanatory user interface allows easy, intuitive handling. The prototypes are very comfortable to use thanks to several control routines that work in the background. The assistive devices have been tested extensively with several healthy persons; the knee/hip movement therapy device is now under clinical trials at the Clinic for Orthopaedics and Trauma Surgery at the Klinikum Stuttgart.
Ma, Ye; Xie, Shengquan; Zhang, Yanxin
2016-03-01
A patient-specific electromyography (EMG)-driven neuromuscular model (PENm) is developed for the potential use of human-inspired gait rehabilitation robots. The PENm is modified based on the current EMG-driven models by decreasing the calculation time and ensuring good prediction accuracy. To ensure the calculation efficiency, the PENm is simplified into two EMG channels around one joint with minimal physiological parameters. In addition, a dynamic computation model is developed to achieve real-time calculation. To ensure the calculation accuracy, patient-specific muscle kinematics information, such as the musculotendon lengths and the muscle moment arms during the entire gait cycle, are employed based on the patient-specific musculoskeletal model. Moreover, an improved force-length-velocity relationship is implemented to generate accurate muscle forces. Gait analysis data including kinematics, ground reaction forces, and raw EMG signals from six adolescents at three different speeds were used to evaluate the PENm. The simulation results show that the PENm has the potential to predict accurate joint moment in real-time. The design of advanced human-robot interaction control strategies and human-inspired gait rehabilitation robots can benefit from the application of the human internal state provided by the PENm.
NASA Astrophysics Data System (ADS)
Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.
2016-05-01
Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
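A compact way to realize the pipeline described, cluster the cue observations, then store the N most frequent co-occurring signals per cluster, with scikit-learn; the data here are random placeholders for the study's annotated cue/signal pairs:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Hypothetical data: each row is a social-cue observation vector, each
# label the signal annotators said it conveyed (integer-coded).
rng = np.random.default_rng(1)
cues = rng.normal(size=(300, 8))
signals = rng.integers(0, 6, size=300)

km = KMeans(n_clusters=10, n_init=10, random_state=1).fit(cues)

# For each cluster, keep the N most frequent co-occurring signals.
N = 3
n_most_likely = {
    c: [sig for sig, _ in Counter(signals[km.labels_ == c]).most_common(N)]
    for c in range(km.n_clusters)
}

def candidate_signals(cue_vec):
    """Return the N-Most-Likely-States set for a new cue observation."""
    return n_most_likely[int(km.predict(cue_vec.reshape(1, -1))[0])]
```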
Managing Workload in Human-Robot Interaction: A Review of Empirical Studies
2010-01-01
...central concern in determining successful teleoperation. Regardless of the sophistication of the technology, a robot is operated – with different levels... by many characteristics, including the type of workload manipulation, the apparatus used, task characteristics, and/or type of outcome measures. Due... linguistic patterns. Furthermore, this interference may not even be detected if operators do not explicitly measure team communication performance, or re...
Quantifying the human-robot interaction forces between a lower limb exoskeleton and healthy users.
Rathore, Ashish; Wilcox, Matthew; Ramirez, Dafne Zuleima Morgado; Loureiro, Rui; Carlson, Tom
2016-08-01
To counter the many disadvantages of prolonged wheelchair use, patients with spinal cord injuries (SCI) are beginning to turn towards robotic exoskeletons. However, we are currently unaware of the magnitude and distribution of forces acting between the user and the exoskeleton. This is a critical issue, as SCI patients have an increased susceptibility to skin lesions and pressure ulcer development. Therefore, we developed a real-time force measuring apparatus, which was placed at the physical human-robot interface (pHRI) of a lower limb robotic exoskeleton. Experiments captured the dynamics of these interaction forces whilst the participants performed a range of typical stepping actions. Our results indicate that peak forces occurred at the anterior aspect of both the left and right legs, areas that are particularly prone to pressure ulcer development. A significant difference was also found between the average force experienced at the anterior and posterior sensors of the right thigh during the swing phase for different movement primitives. These results call for the integration of instrumented straps as standard in lower limb exoskeletons. They also highlight the potential of such straps to be used as an alternative/complementary interface for the high-level control of lower limb exoskeletons in some patient groups.
Social Robots as Embedded Reinforcers of Social Behavior in Children with Autism
ERIC Educational Resources Information Center
Kim, Elizabeth S.; Berkovits, Lauren D.; Bernier, Emily P.; Leyzberg, Dan; Shic, Frederick; Paul, Rhea; Scassellati, Brian
2013-01-01
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur…
Heuristic control of the Utah/MIT dextrous robot hand
NASA Technical Reports Server (NTRS)
Bass, Andrew H., Jr.
1987-01-01
Basic hand grips and sensor interactions that a dextrous robot hand will need as part of the operation of an EVA Retriever are analyzed. What is to be done with a dextrous robot hand is examined, along with how such a complex machine might be controlled. It was assumed throughout that an anthropomorphic robot hand should perform tasks just as a human would; i.e., the most efficient approach to developing control strategies for the hand would be to model actual hand actions and do the same tasks in the same ways. Therefore, the basic grips that human hands perform, as well as hand grip action, were analyzed. It was also important to examine what is termed sensor fusion: the integration of various disparate sensor feedback paths. These feedback paths can be spatially and temporally separated, as well as of different sensor types. Neural networks are seen as a means of integrating these varied sensor inputs and types. Basic heuristics of hand actions and grips were developed. These heuristics offer promise for controlling dextrous robot hands in a more natural and efficient way.
Sharing skills: using augmented reality for human-robot collaboration
NASA Astrophysics Data System (ADS)
Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger
2004-05-01
Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: Which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures, and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on-the-fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct, and enhance them interactively. After a satisfying result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.
Torres, Luis G; Kuntz, Alan; Gilbert, Hunter B; Swaney, Philip J; Hendrick, Richard J; Webster, Robert J; Alterovitz, Ron
2015-05-01
Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot's shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot's tip. However, the robot's unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot's shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles.
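A sketch of the precomputed-roadmap idea the paper describes: sample collision-free configurations offline, connect near neighbors, then answer interactive tip goals by graph search. The collision check and kinematics below are stand-ins, not a concentric-tube model:

```python
import numpy as np
import networkx as nx

def in_collision(q):           # hypothetical anatomy check (spherical obstacle)
    return np.linalg.norm(q - 0.5) < 0.15

def tip_position(q):           # hypothetical forward kinematics
    return q

# Offline: build the roadmap of collision-free configurations.
rng = np.random.default_rng(2)
samples = [q for q in rng.uniform(0, 1, size=(500, 3)) if not in_collision(q)]
G = nx.Graph()
for i, q in enumerate(samples):
    G.add_node(i, q=q)
for i, qi in enumerate(samples):
    for j in range(i + 1, len(samples)):
        d = float(np.linalg.norm(qi - samples[j]))
        if d < 0.12:
            G.add_edge(i, j, weight=d)

# Online: steer toward a user-selected tip goal via the roadmap
# (assumes the roadmap is connected; a real planner would handle failure).
def plan_to(goal_tip, start_idx=0):
    goal_idx = min(G.nodes,
                   key=lambda n: np.linalg.norm(tip_position(G.nodes[n]["q"]) - goal_tip))
    return nx.shortest_path(G, start_idx, goal_idx, weight="weight")
```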
A Multimodal Emotion Detection System during Human-Robot Interaction
Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
2013-01-01
In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
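The abstract does not publish the decision rule itself; purely as an illustration of the idea of combining the two channels, a confidence-based rule might look like this (thresholds and tie-breaking are invented, not the RDS/GEVA/GEFA rule):

```python
def fuse(voice, face):
    """voice, face: (label, confidence) tuples from the two analyzers."""
    v_label, v_conf = voice
    f_label, f_conf = face
    if v_label == f_label:
        return v_label                      # channels agree
    if abs(v_conf - f_conf) > 0.2:
        return v_label if v_conf > f_conf else f_label   # trust the stronger channel
    return "neutral"                        # low-margin disagreement

print(fuse(("happy", 0.8), ("surprise", 0.5)))   # -> "happy"
```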
Use of spring-roll EAP actuator applied as end-effector of a hyper-redundant robot
NASA Astrophysics Data System (ADS)
Errico, Gianmarco; Fava, Victor; Resta, Ferruccio; Ripamonti, Francesco
2015-04-01
This paper presents a hyper-redundant continuous robot used to perform work in places that humans cannot reach. Robots of this type are generally bio-inspired: they are composed of many flexible segments driven by multiple actuators, and their dynamics are described by many degrees of freedom. In this paper a model composed of rigid links connected to each other by revolute joints is presented. A torsional spring is added at each link to simulate the resistant torque between the links and the interactions between the cables and the robot during relative rotation. Moreover, a type of EAP actuator, called a spring roll, is used as the end-effector of the robot. Using a suitable sensor, such as a camera, the spring roll can track a target, closing the control loop that allows the robot to follow it.
Fusing human and machine skills for remote robotic operations
NASA Technical Reports Server (NTRS)
Schenker, Paul S.; Kim, Won S.; Venema, Steven C.; Bejczy, Antal K.
1991-01-01
The question of how computer assists can improve teleoperator trajectory tracking during both free and force-constrained motions is addressed. Computer graphics techniques which enable the human operator to both visualize and predict detailed 3D trajectories in real-time are reported. Man-machine interactive control procedures for better management of manipulator contact forces and positioning are also described. It is found that collectively, these novel advanced teleoperations techniques both enhance system performance and significantly reduce control problems long associated with teleoperations under time delay. Ongoing robotic simulations of the 1984 space shuttle Solar Maximum EVA Repair Mission are briefly described.
Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study.
Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna; Andrade, Jackie
2018-05-03
Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. ©Joana Galvão Gomes da Silva, David J Kavanagh, Tony Belpaeme, Lloyd Taylor, Konna Beeson, Jackie Andrade. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.05.2018.
Baddoura, Ritta; Venture, Gentiane
2014-01-01
During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly regarding feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness, using Inertial Measurement Units (IMU)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, with regard to the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures when considering the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions. PMID:24688466
Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari
2014-01-01
Deploying large numbers of mobile robots that can interact with each other produces swarm-intelligent behavior. However, mobile robots normally run on a finite energy resource supplied by a finite battery, and this limitation has traditionally required human intervention to recharge the batteries. Sharing information among the mobile robots is one way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system, inspired by the foraging of honeybees, applied to a multi-mobile-robot scenario. This integrated approach caters for both working and foraging stages for known/unknown power station locations. Swarm mobile robots inspired by honeybees are simulated to explore and identify power stations for battery recharging. The mobile robots share the location information of the power stations with each other. The results showed that mobile robots consume less energy and less time when they cooperate with each other in the foraging process. Optimizing the foraging behavior lets the mobile robots spend more time doing real work. PMID:24949491
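A toy sketch of the honeybee-inspired scheme: each robot works until its battery runs low, then either recharges at a station another robot has reported or forages for one itself. The thresholds, state names, and shared-memory interface are illustrative assumptions, not the paper's implementation.

```python
class ForagerBot:
    """Toy honeybee-inspired forager: work until the battery runs low,
    then seek a power station, sharing discovered station locations
    with the rest of the swarm."""

    def __init__(self, swarm_memory, battery=100.0):
        self.battery = battery
        self.swarm_memory = swarm_memory   # shared list of station positions

    def step(self):
        self.battery -= 1.0                # energy cost per time step
        if self.battery > 30.0:
            return "work"                  # keep doing the real task
        if self.swarm_memory:              # a mate already found a station
            return ("recharge_at", self.swarm_memory[-1])
        return "explore"                   # forage for an unknown station

    def report_station(self, station_pos):
        """Waggle-dance analogue: publish a found station to the swarm."""
        if station_pos not in self.swarm_memory:
            self.swarm_memory.append(station_pos)
```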
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.
Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela
2015-12-01
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sawers, Andrew; Bhattacharjee, Tapomayukh; McKay, J Lucas; Hackney, Madeleine E; Kemp, Charles C; Ting, Lena H
2017-01-31
Physical interactions between two people are ubiquitous in our daily lives, and an integral part of many forms of rehabilitation. However, few studies have investigated forces arising from physical interactions between humans during a cooperative motor task, particularly during overground movements. As such, the direction and magnitude of interaction forces between two human partners, how those forces are used to communicate movement goals, and whether they change with motor experience remain unknown. A better understanding of how cooperative physical interactions are achieved in healthy individuals of different skill levels is a first step toward understanding principles of physical interactions that could be applied to robotic devices for motor assistance and rehabilitation. Interaction forces between expert and novice partner dancers were recorded while performing a forward-backward partnered stepping task with assigned "leader" and "follower" roles. Their position was recorded using motion capture. The magnitude and direction of the interaction forces were analyzed and compared across groups (i.e. expert-expert, expert-novice, and novice-novice) and across movement phases (i.e. forward, backward, change of direction). All dyads were able to perform the partnered stepping task with some level of proficiency. Relatively small interaction forces (10-30N) were observed across all dyads, but were significantly larger among expert-expert dyads. Interaction forces were also found to be significantly different across movement phases. However, interaction force magnitude did not change as whole-body synchronization between partners improved across trials. Relatively small interaction forces may communicate movement goals (i.e. "what to do and when to do it") between human partners during cooperative physical interactions. Moreover, these small interaction forces vary with prior motor experience, and may act primarily as guiding cues that convey information about movement goals rather than providing physical assistance. This suggests that robots may be able to provide meaningful physical interactions for rehabilitation using relatively small force levels.
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulations studies of two or more interacting robots.
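A minimal sketch of such a particle-model strategy, combining a homing term toward the goal with repulsion from robots that enter the visibility cone; the gains and cone half-angle are illustrative assumptions, not the paper's values, and the heading is assumed to be a unit vector.

```python
import numpy as np

def in_visibility_cone(pos, heading, other, half_angle=np.pi / 4):
    """True if `other` lies inside this robot's cone of visibility."""
    to_other = other - pos
    cos_angle = np.dot(heading, to_other) / (np.linalg.norm(to_other) + 1e-9)
    return cos_angle > np.cos(half_angle)

def navigate(pos, heading, goal, others, avoid_gain=1.5):
    """Homing velocity plus collision-avoidance repulsion (illustrative gains)."""
    v = goal - pos                                    # homing term
    for other in others:
        if in_visibility_cone(pos, heading, other):   # interact only if visible
            away = pos - other
            v += avoid_gain * away / (np.linalg.norm(away) ** 2 + 1e-9)
    return v / (np.linalg.norm(v) + 1e-9)             # unit command direction
```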
Robonaut Mobile Autonomy: Initial Experiments
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Goza, S. M.; Tyree, K. S.; Huber, E. L.
2006-01-01
A mobile version of the NASA/DARPA Robonaut humanoid recently completed initial autonomy trials working directly with humans in cluttered environments. This compact robot combines the upper body of the Robonaut system with a Segway Robotic Mobility Platform yielding a dexterous, maneuverable humanoid ideal for interacting with human co-workers in a range of environments. This system uses stereovision to locate human teammates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form complex behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Robotics and neuroscience: a rhythmic interaction.
Ronsse, Renaud; Lefèvre, Philippe; Sepulchre, Rodolphe
2008-05-01
At the crossing between motor control neuroscience and robotics system theory, the paper presents a rhythmic experiment that is amenable both to handy laboratory implementation and simple mathematical modeling. The experiment is based on an impact juggling task, requiring the coordination of two upper-limb effectors and some phase-locking with the trajectories of one or several juggled objects. We describe the experiment, its implementation and the mathematical model used for the analysis. Our underlying research focuses on the role of sensory feedback in rhythmic tasks. In a robotic implementation of our experiment, we study the minimum feedback that is required to achieve robust control. A limited source of feedback, measuring only the impact times, is shown to give promising results. A second field of investigation concerns the human behavior in the same impact juggling task. We study how a variation of the tempo induces a transition between two distinct control strategies with different sensory feedback requirements. Analogies and differences between the robotic and human behaviors are obviously of high relevance in such a flexible setup.
NASA Astrophysics Data System (ADS)
Almubarak, Yara; Tadesse, Yonas
2017-04-01
The potential applications of humanoid robots in social environments motivate researchers to design and control biomimetic humanoid robots. Generally, people are more interested in interacting with robots whose attributes and movements resemble those of humans. The head is one of the most important parts of any social robot. Currently, most humanoid heads use electrical motors, pneumatic actuators, or shape memory alloy (SMA) actuators for actuation. Electrical and pneumatic actuators take up considerable space and can produce jerky motions, while SMAs are expensive to use in humanoids. Recently, Twisted and Coiled Polymer (TCP) artificial muscles have been used as linear actuators in many robotic projects, as they take up little space compared to motors. In this paper, we demonstrate the design process and motion control of a robotic head driven by TCP muscles. Servomotors and artificial muscles actuate the head motion and are controlled by a cost-efficient ARM Cortex-M7 based development board. A complete comparison between the two actuators is presented.
System-level challenges in pressure-operated soft robotics
NASA Astrophysics Data System (ADS)
Onal, Cagdas D.
2016-05-01
The last decade witnessed the revival of fluidic soft actuation. As pressure-operated soft robotics becomes more popular with promising recent results, system integration remains an outstanding challenge. Inspired greatly by biology, we envision future robotic systems that embrace mechanical compliance, with bodies composed of soft and hard components as well as electronic and sensing sub-systems, such that robot maintenance starts to resemble surgery. In this vision, portable energy sources and driving infrastructure play a key role in offering autonomous many-DoF soft actuation. On the other hand, while offering many advantages in safety and adaptability for interacting with unstructured environments, objects, and human bodies, mechanical compliance also violates many inherent assumptions of traditional rigid-body robotics. Thus, a complete soft robotic system requires new approaches to proprioception that provide rich sensory information while remaining flexible, and to motion control under significant time delay. This paper discusses our proposed solutions for each of these system-level challenges in soft robotics research.
The use of robots for arms control treaty verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalowski, S.J.
1991-01-01
Many aspects of the superpower relationship now present a new set of challenges and opportunities, including the vital area of arms control. This report addresses one such possibility: the use of robots for the verification of arms control treaties. The central idea of this report is far from commonly accepted; in fact, it was encountered only once in the bibliographic review phase of the project. Nonetheless, the incentive for using robots is simple and coincides with that of industrial applications: to replace or supplement human activity in the performance of tasks for which human participation is unnecessary, undesirable, impossible, too dangerous, or too expensive. As in industry, robots should replace workers (in this case, arms control inspectors) only when questions of efficiency, reliability, safety, security, and cost-effectiveness have been answered satisfactorily. In writing this report, it is not our purpose to strongly advocate the application of robots in verification. Rather, we wish to explore the significant aspects, pro and con, of applying experience from the field of flexible automation to the complex task of assuring arms control treaty compliance. We want to establish a framework for further discussion of this topic and to define criteria for evaluating future proposals. The author's expertise is in robotics, not arms control. His practical experience has been in developing systems for use in the rehabilitation of severely disabled persons (such as quadriplegics), who can use robots for assistance during activities of everyday living, as well as in vocational applications. This creates a special interest in implementations that, in some way, include a human operator in the control scheme of the robot. As we hope to show in this report, such interactive systems offer the greatest promise of making a contribution to the challenging problems of treaty verification. 15 refs.
Mobility Performance Algorithms for Small Unmanned Ground Vehicles
2009-05-01
[...] obstacles need to be developed; specifically, models and data for wheeled vehicle skid steering, interior building floor and roof surfaces, and stairs [...] an 80-lb SUGV; PackBot® at 50 lb, and Gator™ at 2500 lb. Additionally, the FCS projects that 40% of the military fleet may eventually be robotic [...] sensor input analysis and decision-making time. Fields (2002a) discusses representing the interaction of humans and robots in the OneSAF Testbed Baseline.
Flexible automation of cell culture and tissue engineering tasks.
Knoll, Alois; Scherer, Torsten; Poggendorf, Iris; Lütkemeyer, Dirk; Lehmann, Jürgen
2004-01-01
Until now, the predominant use cases of industrial robots have been routine handling tasks in the automotive industry. In biotechnology and tissue engineering, in contrast, only very few tasks have been automated with robots. New developments in robot platform and robot sensor technology, however, make it possible to automate plants that largely depend on human interaction with the production process, e.g., for material and cell culture fluid handling, transportation, operation of equipment, and maintenance. In this paper we present a robot system that lends itself to automating routine tasks in biotechnology but also has the potential to automate other production facilities that are similar in process structure. After motivating the design goals, we describe the system and its operation, illustrate sample runs, and give an assessment of the advantages. We conclude this paper by giving an outlook on possible further developments.
Characterization of large-area pressure sensitive robot skin
NASA Astrophysics Data System (ADS)
Saadatzi, Mohammad Nasser; Baptist, Joshua R.; Wijayasinghe, Indika B.; Popa, Dan O.
2017-05-01
Sensorized robot skin has considerable promise to enhance robots' tactile perception of surrounding environments. For physical human-robot interaction (pHRI) or autonomous manipulation, a high spatial sensor density is required, typically driven by the skin location on the robot. In our previous study, a 4x4 flexible array of strain sensors were printed and packaged onto Kapton sheets and silicone encapsulants. In this paper, we are extending the surface area of the patch to larger arrays with up to 128 tactel elements. To address scalability, sensitivity, and calibration challenges, a novel electronic module, free of the traditional signal conditioning circuitry was created. The electronic design relies on a software-based calibration scheme using high-resolution analog-to-digital converters with internal programmable gain amplifiers. In this paper, we first show the efficacy of the proposed method with a 4x4 skin array using controlled pressure tests, and then perform procedures to evaluate each sensor's characteristics such as dynamic force-to-strain property, repeatability, and signal-to-noise-ratio. In order to handle larger sensor surfaces, an automated force-controlled test cycle was carried out. Results demonstrate that our approach leads to reliable and efficient methods for extracting tactile models for use in future interaction with collaborative robots.
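The signal chain above replaces analog conditioning with software calibration; a minimal sketch of that idea, fitting a linear counts-to-pressure model per tactel from a controlled pressure sweep. The linear model and function names are assumptions, not the authors' code.

```python
import numpy as np

def calibrate_tactel(adc_counts, applied_pressures):
    """Fit per-sensor (gain, offset) from a controlled pressure sweep."""
    A = np.vstack([adc_counts, np.ones_like(adc_counts)]).T
    gain, offset = np.linalg.lstsq(A, applied_pressures, rcond=None)[0]
    return gain, offset

def read_pressure(raw_counts, gain, offset):
    """Convert raw ADC counts to a calibrated pressure estimate."""
    return gain * raw_counts + offset
```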
Human interaction with robotic systems: performance and workload evaluations.
Reinerman-Jones, L; Barber, D J; Szalma, J L; Hancock, P A
2017-10-01
We first tested the effect of differing tactile informational forms (i.e. directional cues vs. static cues vs. dynamic cues) on objective performance and perceived workload in a collaborative human-robot task. A second experiment evaluated the influence of task load and informational message type (i.e. single words vs. grouped phrases) on that same collaborative task. In both experiments, the relationship of personal characteristics (attentional control and spatial ability) to performance and workload was also measured. In addition to objective performance and self-reported cognitive load, we evaluated different physiological responses in each experiment. Results showed a performance-workload association for directional cues, message type, and task load. EEG measures, however, proved generally insensitive to such task load manipulations. Where significant EEG effects were observed, right-hemisphere amplitude differences predominated, although unexpectedly these latter relationships were negative. Although EEG measures were partially associated with performance, they appear to possess limited utility as measures of workload in association with tactile displays. Practitioner Summary: As practitioners look to take advantage of innovative tactile displays in complex operational realms like human-robot interaction, associated performance effects are mediated by cognitive workload. Despite some patterns of association, reliable reflections of operator state can be difficult to discern and employ as the number, complexity, and sophistication of these measures increase.
Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo
2014-09-01
Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some lightweight robot platforms and purpose-designed devices include safety properties through redundant sensors or intrinsically safe design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet the sensor capabilities and control algorithms used in functional therapies require, in general, frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a result, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support, because their global fail-safe properties could not be maintained. Within the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for helping to extend the safety rating of Human Robot Interaction (HRI) systems built from unsafe components, including sensors and controllers. SafeNet considers, in fact, the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The enabled capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of any custom sensors and algorithms, usually planned and assembled at therapy planning time rather than at platform design time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution is applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A universal ankle-foot prosthesis emulator for human locomotion experiments.
Caputo, Joshua M; Collins, Steven H
2014-03-01
Robotic prostheses have the potential to significantly improve mobility for people with lower-limb amputation. Humans exhibit complex responses to mechanical interactions with these devices, however, and computational models are not yet able to predict such responses meaningfully. Experiments therefore play a critical role in development, but have been limited by the use of product-like prototypes, each requiring years of development and specialized for a narrow range of functions. Here we describe a robotic ankle-foot prosthesis system that enables rapid exploration of a wide range of dynamical behaviors in experiments with human subjects. This emulator comprises powerful off-board motor and control hardware, a flexible Bowden cable tether, and a lightweight instrumented prosthesis, resulting in a combination of low mass worn by the human (0.96 kg) and high mechatronic performance compared to prior platforms. Benchtop tests demonstrated closed-loop torque bandwidth of 17 Hz, peak torque of 175 Nm, and peak power of 1.0 kW. Tests with an anthropomorphic pendulum "leg" demonstrated low interference from the tether, less than 1 Nm about the hip. This combination of low worn mass, high bandwidth, high torque, and unrestricted movement makes the platform exceptionally versatile. To demonstrate suitability for human experiments, we performed preliminary tests in which a subject with unilateral transtibial amputation walked on a treadmill at 1.25 m/s while the prosthesis behaved in various ways. These tests revealed low torque tracking error (RMS error of 2.8 Nm) and the capacity to systematically vary work production or absorption across a broad range (from -5 to 21 J per step). These results support the use of robotic emulators during early stage assessment of proposed device functionalities and for scientific study of fundamental aspects of human-robot interaction. The design of simple, alternate end-effectors would enable studies at other joints or with additional degrees of freedom.
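For concreteness, the two reported performance metrics can be computed from sampled torque and joint-angle traces as below; the array names are illustrative, and np.trapz approximates the work integral over one step.

```python
import numpy as np

def rms_torque_error(tau_desired, tau_measured):
    """RMS torque-tracking error (the paper reports 2.8 Nm)."""
    e = np.asarray(tau_desired) - np.asarray(tau_measured)
    return np.sqrt(np.mean(e ** 2))

def net_work_per_step(torque, angle):
    """Net mechanical work over one step, i.e. the integral of torque with
    respect to joint angle; positive = generation, negative = absorption."""
    return np.trapz(torque, angle)
```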
Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel
2017-01-01
The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a “control software” for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient- therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction. PMID:28713296
Language for action: Motor resonance during the processing of human and robotic voices.
Di Cesare, G; Errante, A; Marchi, M; Cuccio, V
2017-11-01
In this fMRI study we evaluated whether the auditory processing of action verbs pronounced by a human or a robotic voice in the imperative mood differently modulates the activation of the mirror neuron system (MNs). The study produced three results. First, the activation pattern found during listening to action verbs was very similar in both the robot and human conditions. Second, the processing of action verbs compared to abstract verbs determined the activation of the fronto-parietal circuit classically involved during the action goal understanding. Third, and most importantly, listening to action verbs compared to abstract verbs produced activation of the anterior part of the supramarginal gyrus (aSMG) regardless of the condition (human and robot) and in the absence of any object name. The supramarginal gyrus is a region considered to underpin hand-object interaction and associated to the processing of affordances. These results suggest that listening to action verbs may trigger the recruitment of motor representations characterizing affordances and action execution, coherently with the predictive nature of motor simulation that not only allows us to re-enact motor knowledge to understand others' actions but also prepares us for the actions we might need to carry out. Copyright © 2017 Elsevier Inc. All rights reserved.
Co-development of manner and path concepts in language, action, and eye-gaze behavior.
Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J
2014-07-01
In order for artificial intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration will be examined. A special focus will be put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature pertaining to cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprised of three age groups of children-pre-lexical, early lexical, and lexical-and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior-the tutoring spotter system-able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior in teaching to a robot. The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human-robot interaction. Copyright © 2014 Cognitive Science Society, Inc.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve for two purposes. First, it serves as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have needed hand functionality for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.
Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong
2014-09-01
Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Ying; Yu, Miao; Bruck, Hugh A.; Smela, Elisabeth
2018-06-01
To allow robots to interact with humans via touch, new sensing concepts are needed that can detect a wide range of potential interactions and cover the body of a robot. In this paper, a skin-inspired multi-layer tactile sensing architecture is presented and characterized. The structure consists of stretchable piezoresistive strain-sensing layers over foam layers of different stiffness, allowing for both sufficient sensitivity and pressure range for human contacts. Strip-shaped sensors were used in this architecture to produce a deformation response proportional to pressure. The roles of the foam layers were elucidated by changing their stiffness and thickness, allowing the development of a geometric model to account for indenter interactions with the structure. The advantage of this architecture over other approaches is the ability to easily tune performance by adjusting the stiffness or thickness of the foams to tailor the response for different applications. Since viscoelastic materials were used, the temporal effects were also investigated.
2017-02-01
DARPA Robotics Challenge (DRC): Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot. Work performed by the Florida Institute for Human and Machine Cognition (IHMC) from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge.
Design of a Lightweight Soft Robotic Arm Using Pneumatic Artificial Muscles and Inflatable Sleeves.
Ohta, Preston; Valle, Luis; King, Jonathan; Low, Kevin; Yi, Jaehyun; Atkeson, Christopher G; Park, Yong-Lae
2018-04-01
As robots begin to interact with humans and operate in human environments, safety becomes a major concern. Conventional robots, although reliable and consistent, can cause injury to anyone within their range of motion. Soft robots, whose systems are made to be soft and mechanically compliant, are thus a promising alternative due to their lightweight nature and ability to cushion impacts, but current designs often sacrifice accuracy and usefulness for safety. We, therefore, have developed a bioinspired robotic arm combining elements of rigid and soft robotics such that it exhibits the positive qualities of both, namely compliance and accuracy, while maintaining a low weight. This article describes the design of a robotic arm-wrist-hand system with seven degrees of freedom (DOFs). The shoulder and elbow each have two DOFs for two perpendicular rotational motions at each joint, and the hand has two DOFs for wrist rotations and one DOF for a grasp motion. The arm is pneumatically powered using custom-built McKibben-type pneumatic artificial muscles, which are inflated and deflated using binary and proportional valves. The wrist and hand motions are actuated by servomotors. In addition to the actuators, the arm is equipped with a potentiometer in each joint for detecting joint angle changes. Simulation and experimental results for closed-loop position control are also presented in the article.
Real time gesture based control: A prototype development
NASA Astrophysics Data System (ADS)
Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar
2016-03-01
The computer industry is advancing rapidly; within a short span of years, it has grown to employ highly sophisticated techniques. Robots have been replacing humans, increasing the efficiency, accessibility, and accuracy of systems and creating man-machine interaction, and the robotics industry is developing many new trends. However, robots still need to be controlled by humans. This paper presents an approach to controlling a motor, like a robot, with hand gestures rather than through conventional means such as buttons or other physical devices. Controlling robots with hand gestures is very popular nowadays. At this level, gesture features are applied for detecting and tracking the hand in real time. A principal component analysis (PCA) algorithm is used to identify a hand gesture, implemented with the OpenCV image-processing library. Contours, the convex hull, and convexity defects serve as the gesture features. PCA is a statistical approach used to reduce the number of variables in hand recognition while extracting the most relevant information (features) contained in the images (of the hand). After the hand is detected and recognized, a servomotor is controlled using hand gestures as an input device (like a mouse or keyboard), reducing human effort.
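As an illustration of the contour, convex-hull, and convexity-defect features named above, a minimal OpenCV sketch that estimates the number of raised fingers; the depth threshold is illustrative, and the paper additionally applies PCA over such features.

```python
import cv2

def count_fingers(frame):
    """Rough finger count from convexity defects of the largest contour."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)   # assume largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # deep defects correspond to the valleys between raised fingers
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return deep + 1
```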
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms.
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms, a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation on spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as BCI modality. PMID:28948168
Human-Robot Cooperation with Commands Embedded in Actions
NASA Astrophysics Data System (ADS)
Kobayashi, Kazuki; Yamada, Seiji
In this paper, we first propose a novel interaction model, CEA (Commands Embedded in Actions). It explains how some existing systems reduce the workload of their users. We then extend CEA into the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ an ACS (Action Coding System), which describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits CEA's strong point: a user can send a command to a robot through his/her natural actions for the task. The instance of the ECEA derived using the ACS is a temporal extension in which the user maintains the final state of a previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks the object up. In addition, we measure users' cognitive loads under the ECEA and a traditional method, DCM (Direct Commanding Method), in the sweeping task and compare them. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.
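A toy sketch of the sweeping scenario: the user's natural act of holding an object up is itself read as the command to sweep beneath it, and keeping it held sustains the command (the ECEA temporal extension). All state names are illustrative assumptions, not the authors' implementation.

```python
class SweepingRobot:
    """Minimal CEA/ECEA sketch: no explicit command channel exists;
    commands are read off the user's task-natural actions."""

    def __init__(self):
        self.mode = "sweep_open_floor"
        self.target = None

    def observe(self, user_action, object_footprint):
        # ECEA: the held final state of the pick-up action is the command,
        # persisting for as long as the user maintains that state.
        if user_action == "holding_object_up":
            self.mode, self.target = "sweep_under", object_footprint
        else:
            self.mode, self.target = "sweep_open_floor", None
```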
Tsai, Tzung-Cheng; Hsu, Yeh-Liang; Ma, An-I; King, Trevor; Wu, Chang-Huei
2007-08-01
"Telepresence" is an interesting field that includes virtual reality implementations with human-system interfaces, communication technologies, and robotics. This paper describes the development of a telepresence robot called Telepresence Robot for Interpersonal Communication (TRIC) for the purpose of interpersonal communication with the elderly in a home environment. The main aim behind TRIC's development is to allow elderly populations to remain in their home environments, while loved ones and caregivers are able to maintain a higher level of communication and monitoring than via traditional methods. TRIC aims to be a low-cost, lightweight robot, which can be easily implemented in the home environment. Under this goal, decisions on the design elements included are discussed. In particular, the implementation of key autonomous behaviors in TRIC to increase the user's capability of projection of self and operation of the telepresence robot, in addition to increasing the interactive capability of the participant as a dialogist are emphasized. The technical development and integration of the modules in TRIC, as well as human factors considerations are then described. Preliminary functional tests show that new users were able to effectively navigate TRIC and easily locate visual targets. Finally the future developments of TRIC, especially the possibility of using TRIC for home tele-health monitoring and tele-homecare visits are discussed.
Fronto-parietal coding of goal-directed actions performed by artificial agents.
Kupferberg, Aleksandra; Iacoboni, Marco; Flanagin, Virginia; Huber, Markus; Kasparbauer, Anna; Baumgartner, Thomas; Hasler, Gregor; Schmidt, Florian; Borst, Christoph; Glasauer, Stefan
2018-03-01
With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates the naturalness and easiness of interaction. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it is unknown whether this activity is modulated by observing grasping (self-related action) and pointing actions (other-related action) performed by an artificial agent depending on the action goal. Therefore, we designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by both agents, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network not only for observing a human but also a robotic agent for both self-related and social actions. The debriefing after the experiment has shown that actions of human-like artificial agents can be perceived as being goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment. © 2017 Wiley Periodicals, Inc.
Barber, Daniel J; Reinerman-Jones, Lauren E; Matthews, Gerald
2015-05-01
Two experiments were performed to investigate the feasibility for robot-to-human communication of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human-robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants' ability to correctly identify two-tacton phrases. We also investigated the impact of multitasking on performance and workload. Individual difference factors were assessed. Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking. However, some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Participants may be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Tactile communication may be incorporated into multi-modal communication systems for HRI. It also has potential for human-human communication in challenging environments. © 2014, Human Factors and Ergonomics Society.
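As a rough illustration of how a tacton lexicon and two-tacton phrases might be represented in software, the sketch below maps named tactons to vibrotactile pulse patterns and composes them into phrases. The tactons, pulse parameters, and phrase structure are invented for illustration and are not the standardized set used in the experiments.

```python
# Each tacton is a vibrotactile pattern: (pulse duration ms, gap ms, repeats).
# The lexicon entries below are hypothetical examples, not the study's set.
LEXICON = {
    # directional tactons
    "move_north": (100, 50, 2),
    "move_south": (100, 50, 3),
    # dynamic tactons
    "advancing":  (60, 30, 5),
    # static tactons
    "rally_point": (300, 100, 1),
}

def encode_phrase(*tactons: str) -> list[tuple[int, int, int]]:
    """Compose a multi-tacton phrase, e.g. a status followed by a direction."""
    unknown = [t for t in tactons if t not in LEXICON]
    if unknown:
        raise ValueError(f"unknown tactons: {unknown}")
    return [LEXICON[t] for t in tactons]

# A two-tacton phrase like those tested in Experiment 2:
phrase = encode_phrase("advancing", "move_north")
print(phrase)  # [(60, 30, 5), (100, 50, 2)]
```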
Position calibration of a 3-DOF hand-controller with hybrid structure
NASA Astrophysics Data System (ADS)
Zhu, Chengcheng; Song, Aiguo
2017-09-01
A hand-controller is a human-robot interaction device that measures the 3-DOF (degree-of-freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand, so the precision of the 3-DOF position measurement is a key performance factor for hand-controllers. However, in a hybrid-type 3-DOF hand-controller, various errors occur that are thought to originate from machining and assembly variations within the device. This paper presents a calibration method that improves the position tracking accuracy of hybrid-type hand-controllers by determining the actual sizes of the hand-controller parts. Re-measuring and re-calibrating the device yields the actual sizes of the key parts that cause the errors; substituting these measured sizes for the kinematic formula parameters improves the end-position tracking accuracy of the device.
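The calibration idea lends itself to a small worked sketch: forward kinematics computed with nominal part sizes versus re-measured actual sizes. The planar two-link geometry below is a stand-in assumption, since the paper's hybrid mechanism and its formulas are not given here.

```python
# Sketch of parameter calibration: the same joint angles give a different
# (more accurate) end position once measured part sizes replace nominal ones.
import math

def end_position(theta1: float, theta2: float, l1: float, l2: float):
    """Planar 2-link forward kinematics (illustrative stand-in)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Nominal (design) link lengths vs. re-measured actual lengths (mm, assumed)
nominal  = (100.0, 80.0)
measured = (100.6, 79.3)   # from the re-measuring/re-calibrating step

theta1, theta2 = math.radians(30), math.radians(45)
print("before calibration:", end_position(theta1, theta2, *nominal))
print("after calibration: ", end_position(theta1, theta2, *measured))
```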
The flight robotics laboratory
NASA Technical Reports Server (NTRS)
Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.
1988-01-01
The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight degree of freedom manipulator, precision air bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware, software and subsequent integration of these components into a real time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics are described.
Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain
2015-01-01
In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km/h on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning. PMID:26485148
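As a rough illustration of the band-power analysis the abstract mentions, the sketch below computes the power spectral density of a (synthetic) EEG channel and sums it over the conventional mu, beta, and low-gamma bands. The sampling rate, band edges, and data are assumptions, and the study's event-related spectral perturbation analysis is not reproduced.

```python
# Band power from Welch power spectral density on synthetic EEG data.
import numpy as np
from scipy.signal import welch

fs = 500                             # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)   # 60 s of synthetic EEG, stands in for data

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
df = freqs[1] - freqs[0]             # frequency resolution

# Conventional band edges (Hz); not necessarily those used by the authors.
bands = {"mu": (8, 12), "beta": (13, 30), "low gamma": (30, 50)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = psd[mask].sum() * df
    print(f"{name}: {band_power:.3f}")
```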
Agent-based human-robot interaction of a combat bulldozer
NASA Astrophysics Data System (ADS)
Granot, Reuven; Feldman, Maxim
2004-09-01
A small-scale, supervised autonomous bulldozer operating at a remote site was developed to experiment with agent-based human intervention. The model is based on a Lego Mindstorms kit and represents combat equipment whose job performance does not require high accuracy. The model enables evaluation of the system's response to different operator interventions, as well as of a small colony of semi-autonomous dozers. A supervising human may react better than a fully autonomous system to unexpected contingent events, which are a major barrier to implementing full autonomy. Automation is introduced as an improved man-machine interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software environment. Current UGVs demand significant communication resources and constant human operation, so they will be replaced by semi-autonomous, human-supervised (telerobotic) systems. For human intervention at the low layers of the control hierarchy, we suggest a task-oriented control agent that manages a smooth transition between the state in which the robot is operating and the one imposed by the human, disconnecting or adapting to the new situation the elements whose imperfections cause improper robot operation. Preliminary conclusions from the small-scale experiments are presented.
Interpretation and Manipulation in Human Plans.
ERIC Educational Resources Information Center
Newman, Denis; Bruce, Bertram C.
1986-01-01
Uses an analysis of children's interpretations of a complex episode of social interaction to illustrate three features that distinguish them from robot plans and that form a basis for a theory of the development of social action: human plans (1) are social, (2) operate on interpretations, and (3) are used, not just executed. (FL)
Robotics for Human Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Deans, Mathew; Bualat, Maria
2013-01-01
Robots can do a variety of work to increase the productivity of human explorers. Robots can perform tasks that are tedious, highly repetitive or long-duration. Robots can perform precursor tasks, such as reconnaissance, which help prepare for future human activity. Robots can work in support of astronauts, assisting or performing tasks in parallel. Robots can also perform "follow-up" work, completing tasks designated or started by humans. In this paper, we summarize the development and testing of robots designed to improve future human exploration of space.
Towards a sustainable modular robot system for planetary exploration
NASA Astrophysics Data System (ADS)
Hossain, S. G. M.
This thesis investigates multiple perspectives on developing an unmanned robotic system suited to planetary terrains. In this case, the unmanned system consists of unit-modular robots. This type of robot has the potential to be developed and maintained as a sustainable multi-robot system while located far from direct human intervention. Characteristics that make this possible include: cooperation, communication, and connectivity among the robot modules; the flexibility of individual robot modules; the capability of self-healing in the case of a failed module; and the ability to generate multiple gaits by means of reconfiguration. To demonstrate the effects of the high flexibility of an individual robot module, multiple modules of a four-degree-of-freedom unit-modular robot were developed. The robot was equipped with a novel connector mechanism that made self-healing possible, and design strategies included the use of series elastic actuators for better robot-terrain interaction. In addition, various locomotion gaits were generated and explored using the robot modules, which is essential for a modular robot system to achieve robustness and thus successfully navigate and function in a planetary environment. To investigate multi-robot task completion, a biomimetic cooperative load transportation algorithm was developed and simulated. Also, a liquid-motion-inspired theory was developed for coordinating a large number of robot modules, which can be used to traverse the obstacles that inevitably occur in maneuvering over rough terrain, such as in planetary exploration. Keywords: Modular robot, cooperative robots, biomimetics, planetary exploration, sustainability.
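Gait generation by reconfiguration can be illustrated with a toy gait table, in which each named gait assigns every module a phase-shifted joint trajectory. The gaits, angles, and single-joint simplification below are assumptions for illustration; the thesis's four-degree-of-freedom modules are not modeled.

```python
# Toy gait table for a chain of unit-modular robots: each gait is a
# per-module, phase-shifted joint trajectory sampled over time.
import math

def gait_angle(gait: str, module_index: int, t: float) -> float:
    """Joint angle (rad) for one module at time t under a named gait."""
    if gait == "caterpillar":
        # A traveling wave along the chain produces inchworm-like motion.
        return 0.5 * math.sin(2 * math.pi * (t - 0.25 * module_index))
    if gait == "sidewinding":
        # A phase-shifted wave yields lateral displacement.
        return 0.5 * math.sin(2 * math.pi * t + math.pi / 2 * module_index)
    raise ValueError(f"unknown gait: {gait}")

for i in range(4):
    print(f"module {i}: {gait_angle('caterpillar', i, t=0.1):+.3f} rad")
```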
The Role of Autobiographical Memory in the Development of a Robot Self
Pointeau, Gregoire; Dominey, Peter Ford
2017-01-01
This article briefly reviews research in cognitive development concerning the nature of the human self. It then reviews research in developmental robotics that has attempted to retrace parts of the developmental trajectory of the self. This should be of interest to developmental psychologists and researchers in developmental robotics. As a point of departure, one of the most characteristic aspects of human social interaction is cooperation: the process of entering into a joint enterprise to achieve a common goal. Fundamental to this ability to cooperate is the underlying ability to enter into, and engage in, a self-other relation. This suggests that if we intend for robots to cooperate with humans, then to some extent robots must engage in these self-other relations, and hence they must have some aspect of a self. Decades of research in human cognitive development indicate that the self is not fully present from the outset, but rather is developed in a usage-based fashion, that is, through engaging with the world, including the physical world and the social world of animate intentional agents. In an effort to characterize the self, Ulric Neisser noted that the self is not unitary, and he thus proposed five types of self-knowledge that correspond to five distinct components of self: ecological, interpersonal, conceptual, temporally extended, and private. He emphasized the ecological nature of each of these levels and how they are developed through the engagement of the developing child with the physical and interpersonal worlds. Crucially, development of the self has been shown to rely on the child's autobiographical memory. From the developmental robotics perspective, this suggests that in principle it would be possible to develop certain aspects of self in a robot cognitive system in which the robot is engaged in the physical and social world and equipped with an autobiographical memory system. We review a series of developmental robotics studies that make progress in this enterprise. We conclude with a summary of the properties that are required for the development of these different levels of self, and we identify topics for future research. PMID:28676751
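A minimal sketch of an autobiographical memory store of the kind argued for here, under invented names and structure: the robot records episodes (time, participants, action, outcome) and can retrieve its shared history with a partner, a prerequisite for the temporally extended and interpersonal levels of self. No specific architecture from the article is implied.

```python
# Toy episodic/autobiographical memory for a robot cognitive system.
from dataclasses import dataclass, field

@dataclass
class Episode:
    time: float
    agents: tuple[str, ...]   # self and others involved (interpersonal self)
    action: str
    outcome: str

@dataclass
class AutobiographicalMemory:
    episodes: list[Episode] = field(default_factory=list)

    def record(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def shared_history(self, partner: str) -> list[Episode]:
        # The temporally extended self: past joint activity with a partner.
        return [e for e in self.episodes if partner in e.agents]

memory = AutobiographicalMemory()
memory.record(Episode(0.0, ("robot", "anne"), "handed cup", "success"))
memory.record(Episode(5.0, ("robot",), "moved to dock", "success"))
print(len(memory.shared_history("anne")))  # 1
```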
NASA Technical Reports Server (NTRS)
Abell, P. A.; Rivkin, A. S.
2015-01-01
Introduction: Robotic reconnaissance missions to small bodies will directly address aspects of NASA's Asteroid Initiative and will contribute to future human exploration. The NASA Asteroid Initiative comprises two major components: the Grand Challenge and the Asteroid Mission. The first component, the Grand Challenge, focuses on protecting Earth's population from asteroid impacts by detecting potentially hazardous objects with enough warning time either to prevent them from impacting the planet or to implement civil defense procedures. The Asteroid Mission involves sending astronauts to study and sample a near-Earth asteroid (NEA) prior to conducting exploration missions of the Martian system, which includes Phobos and Deimos. The science and technical data obtained from robotic precursor missions that investigate the surface and interior physical characteristics of an object will help identify the pertinent physical properties that will maximize operational efficiency and reduce mission risk for both robotic assets and crew operating in close proximity to, or at the surface of, a small body. These data will help fill crucial strategic knowledge gaps (SKGs) concerning asteroid physical characteristics that are relevant for human exploration considerations at similar small body destinations. Small Body Strategic Knowledge Gaps: For the past several years, NASA has been interested in identifying the key SKGs related to future human destinations. These SKGs highlight the various unknowns and/or data gaps concerning targets that the science and engineering communities would like to have filled in prior to committing crews to explore the Solar System. An action team from the Small Bodies Assessment Group (SBAG) was formed specifically to identify the small body SKGs under the direction of the Human Exploration and Operations Mission Directorate (HEOMD), given NASA's recent interest in NEAs and the Martian moons as potential human destinations [1]. The action team organized the SKGs into four broad themes: 1) Identify human mission targets; 2) Understand how to work on and interact with the small body surface; 3) Understand the small body environment and its potential risk/benefit to crew, systems, and operational assets; and 4) Understand the small body resource potential. Each of these themes was then further subdivided into categories to address specific SKG issues. Robotic Precursor Contributions to SKGs: Robotic reconnaissance missions should be able to address specific aspects related to SKG themes 1 through 4. Theme 1 deals with the identification of human mission targets within the NEA population. The current guideline indicates that human missions to fast-spinning, tumbling, or binary asteroids may be too risky to conduct successfully from an operational perspective. However, no spacecraft mission has visited any of these types of NEAs before. Theme 2 addresses concerns about interacting with the small body surface under microgravity conditions, and how the surface and/or sub-surface properties affect or restrict such interaction for human exploration. The combination of remote sensing instruments and in situ payloads will provide good insight into the asteroid's surface and subsurface properties. SKG theme 3 deals with the environment in and around the small body that may present a nuisance or hazard to any assets operating in close proximity.
Impact and surface experiments will help address issues related to particle size, particle longevity, internal structure, and the near-surface mechanical stability of the asteroid. Understanding or constraining these physical characteristics is important for mission planning. Theme 4 addresses the resource potential of the small body. This is a particularly important aspect of human exploration, since the identification and utilization of resources is key to deep space mission architectures targeting the Martian system (i.e., Phobos and Deimos). Conclusions: Robotic reconnaissance of small bodies can provide a wealth of information relevant to the science and planetary defense of NEAs. Moreover, such missions can provide key insights into small body strategic knowledge gaps and contribute to the overall success of human exploration missions to asteroids.
The shaping of social perception by stimulus and knowledge cues to human animacy.
Cross, Emily S; Ramsey, Richard; Liepelt, Roman; Prinz, Wolfgang; de C Hamilton, Antonia F
2016-01-19
Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature that demonstrates cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self-other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design. © 2015 The Authors.
NASA Astrophysics Data System (ADS)
Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques
2005-06-01
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.
Toward a practical mobile robotic aid system for people with severe physical disabilities.
Regalbuto, M A; Krouskop, T A; Cheatham, J B
1992-01-01
A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.
NASA Astrophysics Data System (ADS)
Ososky, Scott; Sanders, Tracy; Jentsch, Florian; Hancock, Peter; Chen, Jessie Y. C.
2014-06-01
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to operate with perfect reliability at all times. Even systems that are less than 100% reliable can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in, and reliance on, increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans' impressions of robots, the determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-future teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models of robots impact a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
Master-slave robotic system for needle indentation and insertion.
Shin, Jaehyun; Zhong, Yongmin; Gu, Chengfan
2017-12-01
Bilateral control of a master-slave robotic system is a challenging issue in robot-assisted minimally invasive surgery. It requires knowledge of the contact interaction between the surgical (slave) robot and soft tissues. This paper presents a master-slave robotic system for needle indentation and insertion that is able to characterize the contact interaction between the robotic needle and soft tissues. A bilateral controller is implemented using a linear motor for robotic needle indentation and insertion, and a new nonlinear state observer is developed to monitor the contact interaction with soft tissues online. Experimental results demonstrate the efficacy of the proposed master-slave robotic system for robotic needle indentation and insertion.
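A toy position-force bilateral loop in the spirit of the system described: the slave tracks the master's commanded needle position while the contact force (here, from a linear spring standing in for soft tissue) is reflected back to the master. All gains, the tissue model, and names are assumptions; the paper's nonlinear state observer is not reproduced.

```python
# Minimal position-force bilateral control sketch for needle insertion.
K_P = 5.0                  # slave position-tracking gain (assumed)
TISSUE_STIFFNESS = 200.0   # N/m, toy linear tissue model (assumed)
TISSUE_SURFACE = 0.010     # needle contacts tissue at 10 mm (assumed)

def contact_force(slave_pos: float) -> float:
    """Spring-like tissue reaction once the needle passes the surface."""
    depth = slave_pos - TISSUE_SURFACE
    return TISSUE_STIFFNESS * depth if depth > 0 else 0.0

def bilateral_step(master_pos: float, slave_pos: float, dt: float):
    # Forward path: slave servos toward the master's commanded position.
    slave_vel = K_P * (master_pos - slave_pos)
    slave_pos += slave_vel * dt
    # Backward path: contact force is reflected to the master's hand.
    feedback = contact_force(slave_pos)
    return slave_pos, feedback

slave = 0.0
for step in range(100):
    master = 0.0002 * step   # operator advances the needle slowly (m)
    slave, force = bilateral_step(master, slave, dt=0.01)
print(f"slave at {slave * 1000:.2f} mm, reflected force {force:.2f} N")
```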