Złotowski, Jakub A.; Sumioka, Hidenobu; Nishio, Shuichi; Glas, Dylan F.; Bartneck, Christoph; Ishiguro, Hiroshi
2015-01-01
The uncanny valley theory proposed by Mori has been heavily investigated in recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear what impact, if any, an uncanny-looking robot will have in the context of an interaction. In this paper we describe an exploratory empirical study using a live interaction paradigm that involved repeated interactions with robots that differed in embodiment and in their attitude toward a human. We found that both investigated components of uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude, and this effect was especially prominent for a machine-like robot. On the other hand, merely repeating interactions was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result, we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon. PMID:26175702
Compensating for telecommunication delays during robotic telerehabilitation.
Consoni, Leonardo J; Siqueira, Adriano A G; Krebs, Hermano I
2017-07-01
Rehabilitation robotic systems may afford better care, and telerehabilitation may extend the use and benefits of robotic therapy to the home. Data transmissions over distance are bound by intrinsic communication delays, which can be significant enough to render the activity unfeasible. Here we describe an approach that combines unilateral robotic telerehabilitation and serious games. This approach has a modular and distributed design that permits different types of robots to interact without substantial code changes. We demonstrate the approach through an online multiplayer game. Two users can remotely interact with each other with no force exchanges, while a smoothing and prediction algorithm compensates motions for the delay in the Internet connection. We demonstrate that this approach can successfully compensate for data transmission delays, even when testing between the United States and Brazil. This paper presents the initial experimental results, which highlight the performance degradation with increasing delays as well as the improvements provided by the proposed algorithm, and discusses planned future developments.
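The smoothing-and-prediction idea in the abstract above can be sketched in a few lines: exponentially smooth the received position stream to damp jitter, then linearly extrapolate over the measured network delay. The class name, smoothing constant, and update signature below are illustrative assumptions, not the authors' implementation.

```python
class DelayCompensator:
    """Hedged sketch of delay compensation for a streamed 1-D position:
    exponential smoothing followed by linear extrapolation over the
    estimated transmission delay."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # smoothing factor in (0, 1], assumed value
        self.smoothed = None    # last smoothed position

    def update(self, position, dt, delay):
        """position: latest received sample; dt: sampling period;
        delay: estimated one-way network delay (same time unit)."""
        if self.smoothed is None:
            self.smoothed = position
            return position
        previous = self.smoothed
        # Exponential smoothing damps network jitter.
        self.smoothed = self.alpha * position + (1 - self.alpha) * previous
        # Finite-difference velocity from consecutive smoothed samples.
        velocity = (self.smoothed - previous) / dt
        # Extrapolate to where the remote player likely is right now.
        return self.smoothed + velocity * delay
```

In a game loop one compensator per tracked coordinate would be updated each frame with the latest sample and the current round-trip-derived delay estimate.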
System for exchanging tools and end effectors on a robot
Burry, David B.; Williams, Paul M.
1991-02-19
A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot is used which permits the engagement and disengagement of the tool or end effector without the need for a rotational orientation of the tool to the end effector/collet interface. As the tool-changing system is not located on the robot arm, no umbilical cords are located on the robot.
Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.
Borenstein, Jason; Arkin, Ron
2016-02-01
Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way as to influence human behavior is already well-established, but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.
An immune-inspired swarm aggregation algorithm for self-healing swarm robotic systems.
Timmis, J; Ismail, A R; Bjerknes, J D; Winfield, A F T
2016-08-01
Swarm robotics is concerned with the decentralised coordination of multiple robots having only limited communication and interaction abilities. Although fault tolerance and robustness to individual robot failures have often been used to justify the use of swarm robotic systems, recent studies have shown that swarm robotic systems are susceptible to certain types of failure. In this paper we propose an approach to self-healing swarm robotic systems, taking inspiration from the process of granuloma formation, a process of containment and repair found in the immune system. We use a case study of a swarm performing teamwork, where previous work has demonstrated that partially failed robots have the most detrimental effect on overall swarm behaviour. We have developed an immune-inspired approach that permits recovery from certain failure modes during operation of the swarm, overcoming the issues that partially failed robots cause for swarm behaviour.
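The granuloma-inspired containment-and-repair cycle might look roughly like this in simulation: low-energy (partially failed) robots are contained, i.e. dropped from team work, while healthy robots collectively contribute to their repair until they can rejoin. The energy threshold, donation rate, and dictionary fields are invented for illustration and do not come from the paper.

```python
def granuloma_step(robots, energy_threshold=0.2, donation=0.05):
    """One containment/repair iteration over a swarm.
    robots: list of dicts with 'energy' and 'active' fields (assumed model).
    """
    failed = [r for r in robots if r["energy"] < energy_threshold]
    for r in failed:
        r["active"] = False                     # containment: drop from team work
    healthy = [r for r in robots if r["active"]]
    for r in failed:
        r["energy"] += donation * len(healthy)  # shared repair effort
        if r["energy"] >= energy_threshold:
            r["active"] = True                  # recovered: rejoin the swarm
    return robots
```

Repeated calls model the gradual recovery the paper describes: a partially failed robot stays isolated, so it cannot drag the team, and returns once repaired.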
Instrumented Compliant Wrist with Proximity and Contact Sensing for Close Robot Interaction Control.
Laferrière, Pascal; Payeur, Pierre
2017-06-14
Compliance has been exploited in various forms in robotic systems to allow rigid mechanisms to come into contact with fragile objects, or with complex shapes that cannot be accurately modeled. Force feedback control has been the classical approach for providing compliance in robotic systems. However, by integrating other forms of instrumentation with compliance into a single device, it is possible to extend close monitoring of nearby objects before and after contact occurs. As a result, safer and smoother robot control can be achieved both while approaching and while touching surfaces. This paper presents the design and extensive experimental evaluation of a versatile, lightweight, and low-cost instrumented compliant wrist mechanism which can be mounted on any rigid robotic manipulator in order to introduce a layer of compliance while providing the controller with extra sensing signals during close interaction with an object's surface. Arrays of embedded range sensors provide real-time measurements on the position and orientation of surfaces, either located in proximity or in contact with the robot's end-effector, which permits close guidance of its operation. Calibration procedures are formulated to overcome inter-sensor variability and achieve the highest available resolution. A versatile solution is created by embedding all signal processing, while wireless transmission connects the device to any industrial robot's controller to support path control. Experimental work demonstrates the device's physical compliance as well as the stability and accuracy of the device outputs. Primary applications of the proposed instrumented compliant wrist include smooth surface following in manufacturing, inspection, and safe human-robot interaction.
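The calibration against inter-sensor variability mentioned above can be illustrated with an ordinary least-squares gain/offset fit of each range sensor's raw readings against reference distances. The function name and the assumption of a linear sensor model are ours, not the paper's procedure.

```python
def calibrate(readings, references):
    """Fit reference ≈ gain * reading + offset by least squares,
    one fit per sensor, so all sensors report on a common scale.
    Hedged sketch: assumes a linear sensor response."""
    n = len(readings)
    mean_r = sum(readings) / n
    mean_ref = sum(references) / n
    num = sum((r - mean_r) * (t - mean_ref)
              for r, t in zip(readings, references))
    den = sum((r - mean_r) ** 2 for r in readings)
    gain = num / den
    offset = mean_ref - gain * mean_r
    return gain, offset
```

At run time each raw sample would be mapped through `gain * raw + offset` before the proximity measurements are fused into a surface position and orientation estimate.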
Off-line programming motion and process commands for robotic welding of Space Shuttle main engines
NASA Technical Reports Server (NTRS)
Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.
1987-01-01
The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.
Fast-moving soft electronic fish.
Li, Tiefeng; Li, Guorui; Liang, Yiming; Cheng, Tingyu; Dai, Jing; Yang, Xuxu; Liu, Bangyuan; Zeng, Zedong; Huang, Zhilong; Luo, Yingwu; Xie, Tao; Yang, Wei
2017-04-01
Soft robots driven by stimuli-responsive materials have unique advantages over conventional rigid robots, especially in their high adaptability for field exploration and seamless interaction with humans. The grand challenge lies in achieving self-powered soft robots with high mobility, environmental tolerance, and long endurance. We present a soft electronic fish with a fully integrated onboard system for power and remote control. Without any motor, the fish is driven solely by a soft electroactive structure made of dielectric elastomer and ionically conductive hydrogel. The electronic fish can swim at a speed of 6.4 cm/s (0.69 body length per second), which is much faster than previously reported untethered soft robotic fish driven by soft responsive materials. The fish shows consistent performance in a wide temperature range and permits stealth sailing due to its nearly transparent nature. Furthermore, the fish is robust, as it uses the surrounding water as the electric ground and can operate for 3 hours with one single charge. The design principle can be potentially extended to a variety of flexible devices and soft robots.
Magneto-inductive skin sensor for robot collision avoidance: A new development
NASA Technical Reports Server (NTRS)
Chauhan, D. S.; Dehoff, Paul H.
1989-01-01
Safety is a primary concern for robots operating in space. The tri-mode sensor addresses that concern by employing a collision avoidance/management skin around the robot arms. This rf-based skin sensor is at present dual-mode (proximity and tactile). The third mode, pyroelectric, will complement the other two. The proximity mode permits the robot to sense an intruding object, to range the object, and to detect the edges of the object. The tactile mode permits the robot to sense when it has contacted an object and where on the arm it has made contact, and provides a three-dimensional image of the shape of the contact impression. The pyroelectric mode will be added to permit the robot arm to detect the proximity of a hot object and to add sensing redundancy to the other two modes. The rf modes of the sensing skin are presented. These modes employ a highly efficient magnetic material (amorphous metal) in the sensing technique. This results in a flexible sensor array which uses a primarily inductive configuration to permit both capacitive and magneto-inductive sensing of objects, thus optimizing performance in both proximity and tactile modes with the same sensing skin. The fundamental operating principles, design particulars, and theoretical models are provided to aid in the description and understanding of this sensor. Test results are also given.
Towards multi-platform software architecture for Collaborative Teleoperation
NASA Astrophysics Data System (ADS)
Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik
2009-03-01
Augmented Reality (AR) can provide to a Human Operator (HO) real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact more quickly, safely, and easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, making it impossible to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different Mobile platforms to control one or many robots.
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
A Demonstrator Intelligent Scheduler For Sensor-Based Robots
NASA Astrophysics Data System (ADS)
Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.
1987-10-01
The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile-sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language allowing the assembly task to be defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches, and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches in solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.
Material handling robot system for flow-through storage applications
NASA Astrophysics Data System (ADS)
Dill, James F.; Candiloro, Brian; Downer, James; Wiesman, Richard; Fallin, Larry; Smith, Ron
1999-01-01
This paper describes the design, development and planned implementation of a system of mobile robots for use in flow-through storage applications. The robots are being designed with on-board embedded controls so that they can perform their tasks as semi-autonomous workers distributed within a centrally controlled network. On the storage input side, boxes will be identified by bar codes and placed into preassigned flow-through bins. On the shipping side, orders will be forwarded to the robots from a central order processing station, and boxes will be picked from designated storage bins following proper sequencing to permit direct loading into trucks for shipping. Because of the need to maintain high system availability, a distributed control strategy has been selected. When completed, the system will permit robots to be dynamically reassigned responsibilities if an individual unit fails. On-board health diagnostics and condition monitoring will be used to maintain high reliability of the units.
Socially intelligent robots: dimensions of human-robot interaction.
Dautenhahn, Kerstin
2007-04-29
Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.
The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving
ERIC Educational Resources Information Center
Okita, Sandra
2015-01-01
Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprising a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free-flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head-tracking camera system moves stereo cameras mounted on a three-degree-of-freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence and teleoperations for space.
Sensing sociality in dogs: what may make an interactive robot social?
Lakatos, Gabriella; Janiak, Mariusz; Malek, Lukasz; Muszynski, Robert; Konok, Veronika; Tchon, Krzysztof; Miklósi, A
2014-03-01
This study investigated whether dogs would engage in social interactions with an unfamiliar robot, utilize the communicative signals it provides and to examine whether the level of sociality shown by the robot affects the dogs' performance. We hypothesized that dogs would react to the communicative signals of a robot more successfully if the robot showed interactive social behaviour in general (towards both humans and dogs) than if it behaved in a machinelike, asocial way. The experiment consisted of an interactive phase followed by a pointing session, both with a human and a robotic experimenter. In the interaction phase, dogs witnessed a 6-min interaction episode between the owner and a human experimenter and another 6-min interaction episode between the owner and the robot. Each interaction episode was followed by the pointing phase in which the human/robot experimenter indicated the location of hidden food by using pointing gestures (two-way choice test). The results showed that in the interaction phase, the dogs' behaviour towards the robot was affected by the differential exposure. Dogs spent more time staying near the robot experimenter as compared to the human experimenter, with this difference being even more pronounced when the robot behaved socially. Similarly, dogs spent more time gazing at the head of the robot experimenter when the situation was social. Dogs achieved a significantly lower level of performance (finding the hidden food) with the pointing robot than with the pointing human; however, separate analysis of the robot sessions suggested that gestures of the socially behaving robot were easier for the dogs to comprehend than gestures of the asocially behaving robot. Thus, the level of sociality shown by the robot was not enough to elicit the same set of social behaviours from the dogs as was possible with humans, although sociality had a positive effect on dog-robot interactions.
Human-Robot Teams for Unknown and Uncertain Environments
NASA Technical Reports Server (NTRS)
Fong, Terry
2015-01-01
Human-robot interaction, often referred to as HRI by researchers, is the study of interactions between humans and robots. It is a multidisciplinary field with contributions from human-computer interaction and artificial intelligence.
Alac, Morana; Movellan, Javier; Tanaka, Fumihide
2011-12-01
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.
Wilson, James C; Kesler, Mitch; Pelegrin, Sara-Lynn E; Kalvi, LeAnna; Gruber, Aaron; Steenland, Hendrik W
2015-09-30
The physical distance between predator and prey is a primary determinant of behavior, yet few paradigms exist to study this reliably in rodents. The utility of a robotically controlled laser for use in a predator-prey-like (PPL) paradigm was explored in rats. This involved the construction of a robotic two-dimensional gimbal to dynamically position a laser beam in a behavioral test chamber. Custom software was used to control the trajectory and final laser position in response to user input on a console. The software also detected the location of the laser beam and the rodent continuously, so that the dynamics of the distance between them could be analyzed. When the animal and laser beam came within a fixed distance of each other, the animal would either be rewarded with electrical brain stimulation or shocked subcutaneously. Animals that received rewarding electrical brain stimulation could learn to chase the laser beam, while animals that received aversive subcutaneous shock learned to actively avoid the laser beam in the PPL paradigm. Mathematical computations are presented which describe the dynamic interaction of the laser and rodent. The robotic laser offers a neutral stimulus to train rodents in an open field and is the first device versatile enough to assess the distance between predator and prey in real time. With ongoing behavioral testing, this tool will permit the neurobiological investigation of predator-prey-like relationships in rodents, and may have future implications for prosthetic limb development through brain-machine interfaces.
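The paradigm's core signal, the laser-rodent distance and its crossing of a fixed trigger radius, can be sketched as follows. Function names and the exact form of the trigger rule are illustrative assumptions, not the authors' software.

```python
import math

def encounter(laser, rodent, radius):
    """Distance between laser spot and rodent (x, y) positions, and
    whether the fixed trigger radius is crossed -- the moment at which
    the paradigm would deliver brain stimulation or shock."""
    d = math.hypot(laser[0] - rodent[0], laser[1] - rodent[1])
    return d, d <= radius

def distance_series(laser_track, rodent_track):
    """Per-frame laser-rodent distance over two synchronized tracks,
    the time series the paradigm's dynamics analysis would operate on."""
    return [math.hypot(lx - rx, ly - ry)
            for (lx, ly), (rx, ry) in zip(laser_track, rodent_track)]
```

Applied to the continuous tracking output, `distance_series` yields chase (shrinking distance) or avoidance (growing distance) profiles directly.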
Learning inverse kinematics: reduced sampling through decomposition into virtual robots.
de Angulo, Vicente Ruiz; Torras, Carme
2008-12-01
We propose a technique to speed up the learning of the inverse kinematics of a robot manipulator by decomposing it into two or more virtual robot arms. Unlike previous decomposition approaches, this one does not place any requirement on the robot architecture and is thus completely general. Parametrized self-organizing maps are particularly adequate for this type of learning, and permit comparing results obtained directly and through the decomposition. Experimentation shows that time reductions of up to two orders of magnitude are easily attained.
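The decomposition idea can be made concrete for a planar serial arm: the forward kinematics of the full arm equal those of two "virtual" sub-arms composed by a rotation and translation, so each half can be learned over a much smaller sample space. This sketch is ours, not the paper's parametrized self-organizing map implementation; it only verifies the composition identity the decomposition relies on.

```python
import math

def fk_planar(thetas, lengths):
    """Forward kinematics of a planar serial arm: (x, y, orientation)."""
    x = y = angle = 0.0
    for t, l in zip(thetas, lengths):
        angle += t
        x += l * math.cos(angle)
        y += l * math.sin(angle)
    return x, y, angle

def fk_decomposed(thetas, lengths, split=2):
    """Same pose via two virtual robots: compute the second sub-arm's
    pose in its own local frame, then rotate it by the first sub-arm's
    terminal orientation and translate it to the first sub-arm's tip."""
    x1, y1, a1 = fk_planar(thetas[:split], lengths[:split])
    x2, y2, a2 = fk_planar(thetas[split:], lengths[split:])
    x = x1 + math.cos(a1) * x2 - math.sin(a1) * y2
    y = y1 + math.sin(a1) * x2 + math.cos(a1) * y2
    return x, y, a1 + a2
```

Because the two halves interact only through the rigid-body transform, an inverse model for each virtual arm can be trained on far fewer samples than one for the full joint space.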
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
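Two ingredients of such a particle model, a visibility-cone test and a constant-speed homing step, might be sketched as follows. The cone parameterization by half-angle and the specific homing rule are illustrative assumptions, not the article's equations.

```python
import math

def in_visibility_cone(pos, heading, other, half_angle):
    """True if `other` lies inside the robot's visibility cone: the
    angle between the robot's heading and the bearing to `other` is
    at most half_angle (radians)."""
    dx, dy = other[0] - pos[0], other[1] - pos[1]
    bearing = math.atan2(dy, dx)
    # Wrap the angular difference into (-pi, pi].
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

def homing_step(pos, goal, speed, dt):
    """Simple homing strategy: move at constant speed straight toward
    the goal, stopping exactly on it when within one step."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= speed * dt:
        return goal
    return (pos[0] + speed * dt * dx / d, pos[1] + speed * dt * dy / d)
```

A simulation loop would apply `homing_step` to each robot and switch to a collision-avoidance maneuver whenever `in_visibility_cone` reports another robot ahead.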
Towards multi-platform software architecture for Collaborative Teleoperation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domingues, Christophe; Otmane, Samir; Davesne, Frederic
2009-03-05
Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer, and easier with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.
NASA Astrophysics Data System (ADS)
Ayres, R.; Miller, S.
1982-06-01
The characteristics, applications, and operational capabilities of currently available robots are examined. Designed to function at tasks of a repetitive, hazardous, or uncreative nature, robot appendages are controlled by microprocessors which permit some simple decision-making on the job, and have served for sample gathering on the Mars Viking lander. Critical developmental areas concern active sensors at the robot grappler-object interface, where sufficient data must be gathered for the central processor to which the robot is attached to determine the state of completion and suitability of the workpiece. Although present robots must be programmed through every step of a particular industrial process, thus limiting each robot to specialized tasks, the potential for closed cells of batch-processing robot-run units is noted to be close to realization. Finally, consideration is given to methods for retraining the human workforce that robots replace.
Liang, Yuhua Jake; Lee, Seungcheol Austin
2016-09-01
Human-robot interaction (HRI) will soon transform the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, that trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people receive before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by drawing on the persuasive influence of user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.
Abubshait, Abdulaziz; Wiese, Eva
2017-01-01
Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents: mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception in robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.
Emotion attribution to a non-humanoid robot in different social situations.
Lakatos, Gabriella; Gácsi, Márta; Konok, Veronika; Brúder, Ildikó; Bereczky, Boróka; Korondi, Péter; Miklósi, Ádám
2014-01-01
In recent years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. To interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot, and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.
Control of a Serpentine Robot for Inspection Tasks
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1994-01-01
This paper presents a simple and robust kinematic control scheme for the JPL serpentine robot system. The proposed strategy is developed using the damped-least-squares/configuration control methodology, and permits the considerable dexterity of the JPL serpentine robot to be effectively utilized for maneuvering in the congested and uncertain workspaces often encountered in inspection tasks. Computer simulation results are given for the 20 degree-of-freedom (DOF) manipulator system obtained by mounting the twelve DOF serpentine robot at the end-effector of an eight DOF Robotics Research arm/lathe-bed system. These simulations demonstrate that the proposed approach provides an effective method of controlling this complex system.
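The damped-least-squares update at the core of this methodology maps a desired end-effector displacement dx to joint increments dq = Jᵀ(JJᵀ + λ²I)⁻¹dx, where the damping λ keeps joint velocities bounded near singularities. A minimal sketch, with illustrative damping and dimensions (not the values used for the JPL system):

```python
import numpy as np

def dls_step(jacobian, dx, damping=0.05):
    """One damped-least-squares step: dq = J^T (J J^T + lambda^2 I)^{-1} dx.

    jacobian: (m, n) task Jacobian; dx: (m,) desired task-space displacement.
    The damping term trades a small tracking error for bounded joint motion
    near singular configurations.
    """
    J = np.asarray(jacobian)
    JJt = J @ J.T
    dq = J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), dx)
    return dq
```

For a redundant arm (n > m), this picks the minimum-norm joint motion that achieves dx, which is what lets a highly dexterous serpentine robot thread congested workspaces without erratic joint commands.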
Master-slave robotic system for needle indentation and insertion.
Shin, Jaehyun; Zhong, Yongmin; Gu, Chengfan
2017-12-01
Bilateral control of a master-slave robotic system is a challenging issue in robotic-assisted minimally invasive surgery. It requires knowledge of the contact interaction between a surgical (slave) robot and soft tissues. This paper presents a master-slave robotic system for needle indentation and insertion that is able to characterize the contact interaction between the robotic needle and soft tissues. A bilateral controller is implemented using a linear motor for robotic needle indentation and insertion. A new nonlinear state observer is developed to monitor the contact interaction with soft tissues online. Experimental results demonstrate the efficacy of the proposed master-slave robotic system for robotic needle indentation and needle insertion.
Smooth leader or sharp follower? Playing the mirror game with a robot.
Kashi, Shir; Levy-Tzedek, Shelly
2018-01-01
The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.
The Human-Robot Interaction Operating System
NASA Technical Reports Server (NTRS)
Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda
2006-01-01
In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
2016-05-01
ARL-TR-7683 ● MAY 2016 ● US Army Research Laboratory: A Guide for Developing Human-Robot Interaction Experiments in the Robotic… (title truncated; only search-snippet fragments of the abstract survive). The fragments cite Kunkler (2006) on the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback) and Davies B., "A review of robotics in surgery," Proceedings of the Institution of Mechanical Engineers, Part H: Journal… Distribution is unlimited.
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
3D vision system for intelligent milking robot automation
NASA Astrophysics Data System (ADS)
Akhloufi, M. A.
2013-12-01
In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limits and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot was built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D teat positions are computed. This information is then sent to the milking robot for teat-cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras, a technology that will be used in future real-life experimental tests.
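One way such a 2D/3D fusion step might look, as a rough sketch: a 2D appearance mask is intersected with a plausible depth band, and the surviving pixels are averaged into a target position for the cup. All thresholds and names are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def teat_target(depth, mask2d, z_min=0.3, z_max=0.6):
    """Fuse a 2D segmentation mask with a depth band (metres) from a
    TOF/RGBD camera to pick teat candidate pixels, then return their
    centroid as (row, col, mean_depth). Returns None if nothing survives.

    The band limits and the simple AND-fusion rule are illustrative.
    """
    candidates = mask2d & (depth > z_min) & (depth < z_max)
    if not candidates.any():
        return None
    rows, cols = np.nonzero(candidates)
    return rows.mean(), cols.mean(), depth[candidates].mean()
```

Running this per frame gives the robot a continuously updated 3D target even when the udder moves, which is the behavior the abstract describes.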
See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.
Xu, Tian Linger; Zhang, Hui; Yu, Chen
2016-05-01
We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin
2013-01-01
As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.
Social robots as embedded reinforcers of social behavior in children with autism.
Kim, Elizabeth S; Berkovits, Lauren D; Bernier, Emily P; Leyzberg, Dan; Shic, Frederick; Paul, Rhea; Scassellati, Brian
2013-05-01
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.
ERIC Educational Resources Information Center
Dunst, Carl J.; Prior, Jeremy; Hamby, Deborah W.; Trivette, Carol M.
2013-01-01
Findings from two studies of 11 young children with autism, Down syndrome, or attention deficit disorders investigating the effects of Popchilla, a socially interactive robot, on the children's affective behavior are reported. The children were observed under two conditions, child-toy interactions and child-robot interactions, and ratings of child…
Syrdal, Dag Sverre; Dautenhahn, Kerstin; Koay, Kheng Lee; Ho, Wan Ching
2014-01-01
This article describes the prototyping of human-robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots as well as specific robot behaviours, such as agent migration and expressive behaviours. Evaluation of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, as well as single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique allowed them insight into the use of the robot, and that they accepted the use of the robot within the scenario.
Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot
NASA Astrophysics Data System (ADS)
Kelley, Leah; Ostovari, Saam; Burmeister, Aaron B.; Talke, Kurt A.; Pezeshkian, Narek; Rahimi, Amin; Hart, Abraham B.; Nguyen, Hoa G.
2015-05-01
A novel, multi-segmented magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect front and rear driving modules, so the robot can twist and turn. This permits its navigation over surface discontinuities while maintaining its adhesion to the hull. During operation, the magnetic crawler receives forward and turning velocity commands from either a tele-operator or a high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the development of a simple, low-level, leader-follower controller that permits the rear module to follow the front module. The kinematics and dynamics of the two-module magnetic crawler robot are described. The robot's geometry, kinematic constraints, and the user-commanded velocities are used to calculate the desired instantaneous center of rotation and the corresponding central-linkage angle necessary for the back module to follow the front module when turning. The commands to the rear driving motors are determined by applying PID control on the error between the desired and measured linkage-angle position. The controller is designed and tested using MATLAB Simulink. It is then implemented and tested on an early two-module magnetic crawler prototype robot. Results of the simulations and experimental validation of the controller design are presented.
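The control law described above (a geometric computation of the desired central-linkage angle from the commanded velocities, followed by PID on the angle error) might be sketched as follows. The simplified ICR geometry and all gains are illustrative assumptions, not the authors' implementation:

```python
import math

def desired_linkage_angle(v, omega, wheelbase):
    """Desired central-linkage angle from commanded forward speed v (m/s)
    and turn rate omega (rad/s). The instantaneous center of rotation lies
    at radius R = v / omega, and the rear module must bend by roughly
    atan(wheelbase / R) to track the front module (simplified geometry)."""
    if omega == 0.0:
        return 0.0  # driving straight: linkage stays centered
    return math.atan(wheelbase * omega / v)

class LinkagePID:
    """Minimal PID loop on the linkage-angle error; gains are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired, measured):
        error = desired - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # differential command to the rear driving motors
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each control cycle, the embedded microcomputer would compute the desired angle from the operator's velocity commands and feed the measured linkage angle into the PID update to steer the rear module.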
Wang, Rosalie H; Sudhama, Aishwarya; Begum, Momotaz; Huq, Rajibul; Mihailidis, Alex
2017-01-01
Robots have the potential to both enable older adults with dementia to perform daily activities with greater independence, and provide support to caregivers. This study explored perspectives of older adults with Alzheimer's disease (AD) and their caregivers on robots that provide stepwise prompting to complete activities in the home. Ten dyads participated: Older adults with mild-to-moderate AD and difficulty completing activity steps, and their family caregivers. Older adults were prompted by a tele-operated robot to wash their hands in the bathroom and make a cup of tea in the kitchen. Caregivers observed interactions. Semi-structured interviews were conducted individually. Transcribed interviews were thematically analyzed. Three themes summarized responses to robot interactions: contemplating a future with assistive robots, considering opportunities with assistive robots, and reflecting on implications for social relationships. Older adults expressed opportunities for robots to help in daily activities, were open to the idea of robotic assistance, but did not want a robot. Caregivers identified numerous opportunities and were more open to robots. Several wanted a robot, if available. Positive consequences of robots in caregiving scenarios could include decreased frustration, stress, and relationship strain, and increased social interaction via the robot. A negative consequence could be decreased interaction with caregivers. Few studies have investigated in-depth perspectives of older adults with dementia and their caregivers following direct interaction with an assistive prompting robot. To fulfill the potential of robots, continued dialogue between users and developers, and consideration of robot design and caregiving relationship factors are necessary.
Sartorato, Felippe; Przybylowski, Leon; Sarko, Diana K
2017-07-01
For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the Utilization of Social Animals as a Model for Social Robotics
Miklósi, Ádám; Gácsi, Márta
2012-01-01
Social robotics is a thriving field in building artificial agents. The possibility to construct agents that can engage in meaningful social interaction with humans presents new challenges for engineers. In general, social robotics has been inspired primarily by psychologists with the aim of building human-like robots. Only a small subcategory of “companion robots” (also referred to as robotic pets) was built to mimic animals. In this opinion essay we argue that all social robots should be seen as companions and more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. This view is underlined by the means of an ethological analysis and critical evaluation of present day companion robots. We suggest that human–animal interaction provides a rich source of knowledge for designing social robots that are able to interact with humans under a wide range of conditions. PMID:22457658
Can Robots and Humans Get Along?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2007-06-01
Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.
A Novel Passive Robotic Tool Interface
NASA Astrophysics Data System (ADS)
Roberts, Paul
2013-09-01
The increased capability of space robots has seen their uses expand from simple sample gathering and mechanical adjuncts to humans, to sophisticated multi-purpose investigative and maintenance tools that substitute for humans in many external space tasks. As with all space missions, reducing mass and system complexity is critical. A key component of robotic system mass and complexity is the number of motors and actuators needed. MDA has developed a passive tool interface that, like a household power drill, permits a single tool actuator to be interfaced with many Tool Tips without requiring additional actuators to manage the changing and storage of these tools. MDA's Multifunction Tool interface permits a wide range of Tool Tips to be designed to a single interface that can be pre-qualified to torque and strength limits, so that additional Tool Tips can be added to a mission's "tool kit" simply and quickly.
Analysis of human emotion in human-robot interaction
NASA Astrophysics Data System (ADS)
Blar, Noraidah; Jafar, Fairul Azni; Abdullah, Nurhidayu; Muhammad, Mohd Nazrin; Kassim, Anuar Muhamed
2015-05-01
Robots are widely applied in human work, such as in industry and hospitals, and it is believed that humans and robots can collaborate well to achieve optimal results. The objectives of this project are to analyze human-robot collaboration and to understand human feelings (kansei factors) when dealing with robots, so that robots can adapt to them. Researchers are currently exploring human-robot interaction with the intention of reducing problems that exist in today's society. The study found that to create good interaction between human and robot, it is first necessary to understand the abilities of each. Kansei Engineering in robotics was used to carry out the project. Experiments were conducted by distributing a questionnaire to students and technicians, and the results were analyzed using SPSS. The analysis showed that five feelings are significant to humans in human-robot interaction: anxious, fatigue, relaxed, peaceful, and impressed.
Ivaldi, Serena; Anzalone, Salvatore M.; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed
2014-01-01
We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable. PMID:24596554
Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.
Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J
2017-07-01
Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. Through this paper, we present metrics that can be used to identify, from kinematics, whether a physical interaction occurred between two people. We present a simple Activity of Daily Living (ADL) task that involves such an interaction, and we show that these metrics can successfully identify interactions.
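The paper's actual kinematic metrics are not reproduced in the abstract; as a hedged illustration of the idea, one simple coupling metric is the peak normalized cross-correlation between two actors' speed profiles (all names below are our own):

```python
import numpy as np

def velocity(positions, dt):
    """Finite-difference velocity from an (N x 3) position trace."""
    return np.gradient(positions, dt, axis=0)

def interaction_score(p1, p2, dt):
    """Illustrative kinematic coupling metric: peak normalized
    cross-correlation between two actors' speed profiles.
    High values suggest coupled (interacting) movement."""
    v1 = np.linalg.norm(velocity(p1, dt), axis=1)
    v2 = np.linalg.norm(velocity(p2, dt), axis=1)
    # z-normalize so the score is scale-invariant
    v1 = (v1 - v1.mean()) / (v1.std() + 1e-12)
    v2 = (v2 - v2.mean()) / (v2.std() + 1e-12)
    xcorr = np.correlate(v1, v2, mode="full") / len(v1)
    return float(xcorr.max())
```

Identical traces score near 1.0, while uncoupled movement scores much lower; a threshold on such a score could flag that an interaction occurred.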
Embodied cognition for autonomous interactive robots.
Hoffman, Guy
2012-10-01
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.
Do infants perceive the social robot Keepon as a communicative partner?
Peca, Andreea; Simut, Ramona; Cao, Hoang-Long; Vanderborght, Bram
2016-02-01
This study investigates if infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. 23 infants, aged 9-17 months, were exposed, in a first phase, to either a contingent interaction between the active robot and an active human adult, or to an interaction between an active human adult and the non-active robot, followed by a second phase, in which infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were: (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention to the agent that follows in the conversation. The results indicate a significantly higher level of initiations in the interactive robot condition compared to the non-active robot condition, while the difference between the frequencies of anticipations of turn-taking behaviors was not significant. Copyright © 2015 Elsevier Inc. All rights reserved.
Modeling Leadership Styles in Human-Robot Team Dynamics
NASA Technical Reports Server (NTRS)
Cruz, Gerardo E.
2005-01-01
The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to which situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.
Trust and Trustworthiness in Human-Robot Interaction: A Formal Conceptualization
2016-05-11
This AFRL report (AFRL-AFOSR-VA-TR-2016-0198), by Alan Wagner of Georgia Tech Applied Research, evaluated algorithms for characterizing trust during interactions between a robot and a human, and employed strategies for repairing trust during emergencies.
NASA Astrophysics Data System (ADS)
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics belong to the most revolutionary. Our work aims at setting up new techniques for the detection, 3D delineation, and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems that make tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners, so that they can share the same 3D reconstructed patient and interact on that patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Nicewarner, Keith
2006-01-01
We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose and recover from a simulated spacecraft fault.
Interactive robots in experimental biology.
Krause, Jens; Winfield, Alan F T; Deneubourg, Jean-Louis
2011-07-01
Interactive robots have the potential to revolutionise the study of social behaviour because they provide several methodological advances. In interactions with live animals, the behaviour of robots can be standardised, morphology and behaviour can be decoupled (so that different morphologies and behavioural strategies can be combined), behaviour can be manipulated in complex interaction sequences and models of behaviour can be embodied by the robot and thereby be tested. Furthermore, robots can be used as demonstrators in experiments on social learning. As we discuss here, the opportunities that robots create for new experimental approaches have far-reaching consequences for research in fields such as mate choice, cooperation, social learning, personality studies and collective behaviour. Copyright © 2011 Elsevier Ltd. All rights reserved.
Perspectives on mobile robots as tools for child development and pediatric rehabilitation.
Michaud, François; Salter, Tamie; Duquette, Audrey; Laplante, Jean-François
2007-01-01
Mobile robots (i.e., robots capable of translational movements) can be designed to become interesting tools for child development studies and pediatric rehabilitation. In this article, the authors present two of their projects that involve mobile robots interacting with children: One is a spherical robot deployed in a variety of contexts, and the other is mobile robots used as pedagogical tools for children with pervasive developmental disorders. Locomotion capability appears to be key in creating meaningful and sustained interactions with children: Intentional and purposeful motion is an implicit appealing factor in obtaining children's attention and engaging them in interaction and learning. Both of these projects started with robotic objectives but are revealed to be rich sources of interdisciplinary collaborations in the field of assistive technology. This article presents perspectives on how mobile robots can be designed to address the requirements of child-robot interactions and studies. The authors also argue that mobile robot technology can be a useful tool in rehabilitation engineering, reaching its full potential through strong collaborations between roboticists and pediatric specialists.
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry, and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain, and consequently the load on the robot's motors, are constantly changing. Learning the parameters of a proportional, integral, and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a task is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is then used.
A mathematical model of the creative control process is presented that illustrates the use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
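To make the idea of "learning the parameters of a PID controller" concrete, here is a minimal sketch: a discrete PID tracking a step setpoint on a toy first-order motor model, with the proportional gain tuned by numerical gradient descent on tracking error. This is our own illustration, not the adaptive-critic or creative-control scheme the paper describes; all names and the plant model are assumptions.

```python
def simulate(kp, ki, kd, load=1.0, dt=0.01, steps=500):
    """Track a unit step setpoint with a discrete PID driving a
    first-order motor model x' = -load*x + u (a toy stand-in for
    a robot motor under varying terrain load). Returns the
    integrated squared tracking error."""
    x, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        x += dt * (-load * x + u)  # Euler step of the plant
        cost += e * e * dt
    return cost

def tune_kp(kp, ki, kd, load, lr=0.5, iters=20, h=1e-3):
    """Naive learning loop: adjust Kp by a central-difference
    gradient of the error cost. Illustrates 'learning controller
    parameters'; a neural network or adaptive critic would replace
    this crude numerical gradient in practice."""
    for _ in range(iters):
        g = (simulate(kp + h, ki, kd, load)
             - simulate(kp - h, ki, kd, load)) / (2 * h)
        kp = max(0.0, kp - lr * g)
    return kp
```

Re-running the tuner whenever the load changes mimics adapting the controller to new terrain.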
Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior.
Ficocelli, Maurizio; Terao, Junichi; Nejat, Goldie
2016-12-01
The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J.; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J.; Wrede, Britta
2014-01-01
Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction. PMID:24646510
Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin
2013-01-01
As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434
Regulation and Entrainment in Human-Robot Interaction
2000-01-01
Applications for domestic, health-care, or entertainment robots motivate the development of robots that can socially interact with and learn from people. [Figure caption residue: WE-3RII, an expressive face robot developed at Waseda University; Robita, an upper-torso robot also developed at Waseda University to track speaking turns; and Kismet, an expressive robot developed at MIT.]
Fundamentals of soft robot locomotion
Calisti, M; Picardi, G; Laschi, C
2017-05-01
Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somehow conflictual goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s). PMID:28539483
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was significant only for the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modelling cooperation of industrial robots as multi-agent systems
NASA Astrophysics Data System (ADS)
Hryniewicz, P.; Banas, W.; Foit, K.; Gwiazda, A.; Sekala, A.
2017-08-01
Nowadays a robotic cell often contains more than one robot, and dual-arm robots are also available, so the cooperation of two robots in the same space is becoming increasingly important. Programming a robotic cell consisting of two or more robots is currently performed separately for each robot and cell element; only the programs are synchronized, not the robot movements. In such situations industrial robots are often placed so that they share no common space and can be operated separately. When industrial robots do share a workspace, only one robot may occupy the common space at a time; the other must remain outside it. It is very difficult to find applications where two robots work in the same workspace. In the configuration tested, one robot did not move while the second was moving, and waited for the second to send permission before moving; when the permit was sent, the second robot stopped. Such programs are very difficult, require much experience from the programmer, and must be tested separately at the beginning and then run very slowly under supervision. Ideally, the operator attends to exactly one robot during testing, and taking special care is essential.
Kinematic Optimization of Robot Trajectories for Thermal Spray Coating Application
NASA Astrophysics Data System (ADS)
Deng, Sihao; Liang, Hong; Cai, Zhenhua; Liao, Hanlin; Montavon, Ghislain
2014-12-01
Industrial robots are widely used in the field of thermal spray nowadays. Thanks to their high accuracy and programmable flexibility, spraying on workpieces with complex geometries can be realized in an equipped spray booth. In some cases, however, the robot cannot guarantee the process parameters defined by its movement, such as the scanning trajectory, spray angle, and relative speed between the torch and the substrate, which have a distinct influence on heat and mass transfer during the generation of any thermally sprayed coating. In this study, an investigation of the robot kinematics was carried out to find the rules of motion in a common case. The results showed that analyzing the motion behavior of each robot axis makes it possible to identify motion problems in the trajectory. This approach allows the robot trajectory generation to be optimized within a limited working envelope. It also minimizes the influence of robot performance limits, achieving a more constant relative scanning speed, which is a key parameter in thermal spraying.
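The abstract's goal of a constant relative scanning speed can be illustrated with a simple retiming step: given the waypoints of a spray path, assign timestamps so the tool covers each segment at the same speed. This sketch is our own (the paper's actual optimization is not reproduced), and a real implementation would also have to respect joint velocity and acceleration limits.

```python
import numpy as np

def retime_constant_speed(points, speed):
    """Given waypoints (N x 3, in meters) of a spray trajectory,
    return timestamps (seconds) that yield a constant tool speed
    along the polyline -- the 'constant relative scanning speed'
    goal described above. Illustrative sketch only."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg) / speed])    # time = distance / speed
    return t
```

Feeding these timestamps back to the robot as via-point times is one simple way to even out the torch speed over a complex workpiece.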
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
Children's Imaginaries of Human-Robot Interaction in Healthcare.
Vallès-Peris, Núria; Angulo, Cecilio; Domènech, Miquel
2018-05-12
This paper analyzes children’s imaginaries of Human-Robot Interaction (HRI) in the context of social robots in healthcare, and it explores ethical and social issues in designing a social robot for a children’s hospital. Based on approaches that emphasize the reciprocal relationship between society and technology, the analytical force of imaginaries lies in their capacity to be embedded in practices and interactions as well as to affect the construction and applications of surrounding technologies. The study is based on a participatory design process carried out with six-year-old children. Imaginaries of HRI are analyzed from a care-centered approach focusing on children’s values and practices as related to their representation of care. The major findings are the conceptualization of HRI as an assemblage of interactions, the prospective bidirectional care relationships with robots, and the engagement with the robot as an entity of multiple potential robots. The study shows the potential of studying imaginaries of HRI, and it concludes that integrating these imaginaries into the final design of robots is a way of embedding ethical values in it.
Toward a framework for levels of robot autonomy in human-robot interaction.
Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A
2014-07-01
A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence - and are influenced by - robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.
Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei
2016-01-01
Robots play increasingly important roles in daily life and bring considerable convenience. When people work alongside robots, however, significant differences remain between human-human and human-robot interaction. Our goal is to make robots behave in a more human-like way. We designed a controller that can sense a force applied at any point on a robot and make the robot move in response to that force. First, a spring-mass-dashpot system was used as the physical model, with this second-order system forming the kernel of the controller. We then established the state-space equations of the system, and the particle swarm optimization algorithm was used to identify the system parameters. To verify stability, a root-locus diagram is presented. Finally, experiments were carried out on the robotic spinal surgery system developed by our team, and the results show that the new controller performs better during human-robot interaction.
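The second-order model described above can be sketched as a minimal one-dimensional admittance law; the mass, damping, and stiffness values below are illustrative placeholders, not the PSO-tuned parameters of the actual surgical system:

```python
def admittance_step(x, v, force, m=2.0, b=8.0, k=0.0, dt=0.01):
    """One semi-implicit Euler step of the spring-mass-dashpot model
    m*a + b*v + k*x = F, mapping a sensed force to compliant motion."""
    a = (force - b * v - k * x) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Apply a constant 5 N push for one second: the virtual mass yields and
# the velocity settles toward the steady-state value F/b (here 0.625 m/s,
# since k = 0).
x, v = 0.0, 0.0
for _ in range(100):
    x, v = admittance_step(x, v, force=5.0)
print(x, v)
```

With k = 0 the robot drifts freely under a sustained push; a nonzero stiffness instead makes it return to a reference pose when released, which is one of the tuning choices such a controller has to make.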
Interactive autonomy and robotic skills
NASA Technical Reports Server (NTRS)
Kellner, A.; Maediger, B.
1994-01-01
Current concepts of robot-supported operations for space laboratories (payload servicing, inspection, repair, and ORU exchange) are mainly based on 'interactive autonomy'. This concept implies, on the one hand, autonomous behavior of the robot according to predefined timelines, predefined sequences of elementary robot operations, and predefined world models that supply geometrical and other information for parameter instantiation, and, on the other hand, the ability of a human operator to override and change the predefined course of activities. Although in principle a powerful and useful concept, in practice the confinement of the robot to abstract world models and predefined activities reduces its robustness against real-world uncertainties and its applicability to non-predefined parts of the world, calling for frequent corrective intervention by the operator, which can itself be tedious and time-consuming. Methods are presented to improve this situation by incorporating 'robotic skills' into the concept of interactive autonomy.
Interactive Exploration Robots: Human-Robotic Collaboration and Interactions
NASA Technical Reports Server (NTRS)
Fong, Terry
2017-01-01
For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.
KALI - An environment for the programming and control of cooperative manipulators
NASA Technical Reports Server (NTRS)
Hayward, Vincent; Hayati, Samad
1988-01-01
A design description is given of a controller for cooperative robots. The background and motivation for multiple arm control are discussed. A set of programming primitives which permit a programmer to specify cooperative tasks are described. Motion primitives specify asynchronous motions, master/slave motions, and cooperative motions. In the context of cooperative robots, trajectory generation issues are discussed and the authors' implementation briefly described. The relations between programming and control in the case of multiple robots are examined. The allocation of various tasks among a multiprocessor computer is described.
NASA Technical Reports Server (NTRS)
Purves, Lloyd R. (Inventor)
1992-01-01
A robot-serviced space facility includes multiple modules which are identical in physical structure but selectively differ in function and purpose. Each module includes multiple identical attachment points, placed identically on each module so as to permit interconnection with immediately adjacent modules. Connection is made through identical outwardly extending flange assemblies having identical male and female configurations for interconnecting to and locking onto the complementary side of another flange. Multiple rows of interconnected modules permit force, fluid, data, and power transfer through redundant circuit paths. Redundant modules of critical subsystems are included. This redundancy of modules and interconnections yields a space complex in which any module can be removed on demand, either for replacement or for facility reconfiguration, without eliminating any vital function of the complex. Module replacement and facility assembly or reconfiguration are accomplished by a computer-controlled, articulated, walker-type robotic manipulator arm assembly having two identical end-effectors in the form of male configurations identical to those on the module flanges, which interconnect to female configurations on other flanges. The robotic arm assembly moves along a connected set of modules by successively disconnecting, moving, and reconnecting alternate ends of itself to a succession of flanges in a walking-type maneuver. To transport a module, the robot keeps the transported module attached to one of its end-effectors and uses a flange male configuration of the attached module as a substitute end-effector during walking.
[A gearing mechanism with 4 degrees of freedom for robotic applications in medicine].
Pott, P; Weiser, P; Scharf, H P; Schwarz, M
2004-06-01
Applications in robot-aided surgery are currently based on modifications of manipulators used in industrial manufacturing processes. In this paper we describe novel rotatory kinematics for a manipulator specially developed for deployment in robot-aided surgery. The construction of the gearing mechanism used for the positioning and orientation of a linkage point is described. Forward and inverse kinematics were calculated, and a constructive solution is proposed. The gearing mechanism is based on two disc systems, each of which consists of two opposing rotatable discs. The construction was designed in such a way that the linkage point can be positioned freely anywhere within the mechanism's range of motion. The kinematics thus permits x-y positioning via rotary movements only. The spatial arrangement of two such disc systems permits movement in four degrees of freedom (DOF). The construction is compact but can be further miniaturized, is flexible, and has low manufacturing costs. On the basis of this mechanical concept, a new, small automated manipulator for surgical application will be developed.
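As a rough sketch, two superimposed disc rotations positioning a linkage point can be modeled like a planar 2R chain; the radii below are hypothetical, and the paper's actual disc-pair geometry may differ:

```python
import math

def forward(theta1, theta2, r1=0.05, r2=0.03):
    """Linkage-point position from the two disc angles (radians)."""
    x = r1 * math.cos(theta1) + r2 * math.cos(theta1 + theta2)
    y = r1 * math.sin(theta1) + r2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, r1=0.05, r2=0.03):
    """Disc angles that place the linkage point at (x, y), one branch
    of the two-fold solution (analogous to elbow-up / elbow-down)."""
    c2 = (x * x + y * y - r1 * r1 - r2 * r2) / (2 * r1 * r2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(r2 * math.sin(theta2),
                                           r1 + r2 * math.cos(theta2))
    return theta1, theta2

# Round-trip check: inverse kinematics followed by forward kinematics
# recovers the target point.
t1, t2 = inverse(0.06, 0.02)
print(forward(t1, t2))
```

The round trip only works for targets inside the annular workspace between |r1 - r2| and r1 + r2, which is the mechanism's "range of motion" mentioned above.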
Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
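A minimal sketch of such a combined strategy, assuming a standard potential-field formulation (attraction toward the home position plus short-range repulsion inside an effective radius); the gains and the repulsion law are illustrative, not the paper's derived many-body expressions:

```python
import math

def nav_velocity(pos, goal, others, r_eff=1.0, k_att=1.0, k_rep=2.0):
    """Commanded velocity = attraction to the home position plus
    repulsion from every robot/obstacle center inside r_eff."""
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < r_eff:
            push = k_rep * (1.0 / d - 1.0 / r_eff) / d**2
            vx += push * dx
            vy += push * dy
    return vx, vy

# A robot homing on the origin is pushed away from a nearby obstacle;
# with no obstacles, the command points straight at the goal.
print(nav_velocity((0.5, 0.1), (0.0, 0.0), [(0.4, 0.1)]))
print(nav_velocity((1.0, 0.0), (0.0, 0.0), []))
```

The repulsion term vanishing at r_eff corresponds to the effective spatial domain around each robot's center of mass described in the abstract.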
SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.
Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan
2015-11-24
Brain-Robot Interaction (BRI), which provides an innovative communication pathway between human and a robotic device via brain signals, is prospective in helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software as well as user developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility because it is developed by integrating multiple software programs. The results show that using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
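The offline SSVEP feature analysis can be illustrated with a minimal power-spectrum classifier: pick the stimulus frequency whose spectral bin carries the most power in an EEG epoch. This is a simplified stand-in; practical pipelines such as the OpenViBE-based one above typically use more robust detectors (e.g. canonical correlation analysis) and multiple channels:

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs):
    """Return the candidate stimulus frequency with the largest spectral
    power in a single-channel EEG epoch sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

# Synthetic check: a noisy 10 Hz steady-state response among 8/10/12 Hz
# flicker candidates should be classified as 10 Hz.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(eeg, fs, [8.0, 10.0, 12.0]))
```

Each detected frequency would then be mapped to one robot command (e.g. walk forward, turn), closing the loop via the video feedback interface.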
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.
NASA Astrophysics Data System (ADS)
Butail, Sachit; Polverino, Giovanni; Phamduy, Paul; Del Sette, Fausto; Porfiri, Maurizio
2014-03-01
We explore fish-robot interactions in a comprehensive set of experiments designed to highlight the effects of speed and configuration of bioinspired robots on live zebrafish. The robot design and movement are inspired by salient features of attraction in zebrafish and include enhanced coloration, the aspect ratio of a fertile female, and carangiform/subcarangiform locomotion. The robots are autonomously controlled to swim in circular trajectories in the presence of live fish. Our results indicate that robot configuration significantly affects both the fish distance to the robots and the time spent near them.
NASA Astrophysics Data System (ADS)
Bharatharaj, Jaishankar; Huang, Loulin; Al-Jumaily, Ahmed; Elara, Mohan Rajesh; Krägeloh, Chris
2017-09-01
Therapeutic pet robots designed to help humans with various medical conditions could play a vital role in physiological, psychological and social-interaction interventions for children with autism spectrum disorder (ASD). In this paper, we report our findings from a robot-assisted therapeutic study conducted over seven weeks to investigate the changes in stress levels of children with ASD. For this study, we used the parrot-inspired therapeutic robot, KiliRo, we developed and investigated urinary and salivary samples of participating children to report changes in stress levels before and after interacting with the robot. This is a pioneering human-robot interaction study to investigate the effects of robot-assisted therapy using salivary samples. The results show that the bio-inspired robot-assisted therapy can significantly help reduce the stress levels of children with ASD.
Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.
Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi
2018-03-26
For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard because of the limited availability of direct instruction from people. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent obtains intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement.
Avoiding Local Optima with Interactive Evolutionary Robotics
2012-07-09
The main bottleneck in evolutionary robotics has traditionally been the time required to evolve robot controllers. However, with the continued acceleration in computational resources, the... Placing the target object at the top of a flight of stairs selects for climbing; suspending the robot and the target object above the ground and creating rungs between the two will...
Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed.
Reuten, Anne; van Dam, Maureen; Naber, Marnix
2018-01-01
Physiological responses during human-robot interaction are useful alternatives to subjective measures of uncanny feelings toward nearly humanlike robots (uncanny valley) and of comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to test the uncanny valley and media equation hypotheses; evidence for these hypotheses in interaction with emotional robots is scarce; and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces expressing a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and had emotional expressions that were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
Thesis by Eli R. Hooten, submitted to the Faculty of the Graduate School of... The thesis addresses interactive modalities and multi-touch input for a mobile, map-based robot-tasking interface.
Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A
2014-05-01
Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.
A Preliminary Study of Peer-to-Peer Human-Robot Interaction
NASA Technical Reports Server (NTRS)
Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert
2006-01-01
The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.
Robot therapy: a new approach for mental healthcare of the elderly - a mini-review.
Shibata, Takanori; Wada, Kazuyoshi
2011-01-01
Mental healthcare for elderly people is a common problem in advanced countries. Recently, robots have been developed not only for factories but also for our living environment. In particular, human-interactive robots for psychological enrichment, which provide services by interacting with humans while stimulating their minds, are spreading rapidly. Such robots not only entertain but also render assistance, guide, provide therapy, educate, enable communication, and so on. Robot therapy, which uses robots as a substitute for animals in animal-assisted therapy and activity, is a new application of robots that is attracting the attention of many researchers and psychologists. The seal robot Paro was developed especially for robot therapy and has been used in hospitals and facilities for elderly people in several countries. Recent research has revealed that robot therapy has the same effects on people as animal therapy, and it is being recognized as a new method of mental healthcare for elderly people. In this mini-review, we introduce the merits and demerits of animal therapy; explain human-interactive robots for psychological enrichment, the functions required of therapeutic robots, and the seal robot; and provide examples of robot therapy for elderly people, including dementia patients.
New diagnostic tool for robotic psychology and robotherapy studies.
Libin, Elena; Libin, Alexander
2003-08-01
Robotic psychology and robotherapy, as a new research area, employ a systematic approach to studying the psycho-physiological, psychological, and social aspects of person-robot communication. Analyzing the mechanisms underlying different forms of computer-mediated behavior requires both an adequate methodology and suitable research tools. In this article we discuss the concept, basic principles, structure, and contents of the newly designed Person-Robot Complex Interactive Scale (PRCIS), proposed for investigating the psychological specifics and therapeutic potential of multilevel person-robot interactions. Assuming that human-robot communication has symbolic meaning, each interactive pattern evaluated via the scale is assigned a certain psychological value associated with the person's past life experiences, likes and dislikes, and emotional, cognitive, and behavioral traits or states. PRCIS includes (1) assessment of a person's individual style of communication with the robotic creature, based on direct observation; (2) the participant's evaluation of his or her new experiences with an interactive robot, of its features, advantages, and disadvantages, and of past experiences with modern technology; and (3) the instructor's overall evaluation of the session.
User Localization During Human-Robot Interaction
Alonso-Martín, F.; Gorostiza, Javi F.; Malfaz, María; Salichs, Miguel A.
2012-01-01
This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main prerequisites for natural interaction, whether human-human or human-robot, is an adequate spatial configuration between the interlocutors; that is, they must be oriented toward each other and situated at the right distance for a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable placement of the robot in relation to the user, a proxemic study of the human-robot interaction is required, which is described in this paper. The study was made with two groups of users: children aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented. PMID:23012577
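One plausible way to fuse the two modalities, sketched here as inverse-variance weighting of the user-bearing estimates from vision and from sound-source localization; the variance values are hypothetical, and the paper's actual fusion method may differ:

```python
def fuse_bearings(visual_deg, audio_deg, var_visual=4.0, var_audio=25.0):
    """Inverse-variance fusion of two bearing estimates (degrees).
    The lower-variance sensor (here vision) dominates the result."""
    w_v = 1.0 / var_visual
    w_a = 1.0 / var_audio
    return (w_v * visual_deg + w_a * audio_deg) / (w_v + w_a)

# Vision says the user is at 10 deg, audio says 20 deg: the fused
# estimate lands closer to the more precise visual reading.
print(fuse_bearings(10.0, 20.0))
```

A full implementation would also handle angle wraparound and the case where only one modality currently detects the user.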
Interactive Games with an Assistive Robotic System for Hearing-Impaired Children.
Uluer, Pinar; Akalin, Neziha; Gurpinar, Cemal; Kose, Hatice
2017-01-01
This paper presents an assistive robotic system, which can recognize and express sign language words from a predefined set, within interactive games to communicate with and teach hearing-impaired children sign language. The robotic system uses audio, visual and tactile feedback for interaction with the children and the teacher/researcher.
A taxonomy for user-healthcare robot interaction.
Bzura, Conrad; Im, Hosung; Liu, Tammy; Malehorn, Kevin; Padir, Taskin; Tulu, Bengisu
2012-01-01
This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and user acceptance of robotics. Characterization of the user, or in this case the patient, is a primary focus of the paper, as they present a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow and thus it is important that the emerging relationship between robot and patient is well understood.
Anthropomorphic Robot Design and User Interaction Associated with Motion
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2016-01-01
Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction. Their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction.
In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limbs and body positions, 2) improve users' ability to detect anomalous robot behavior which could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
Multi-function robots with speech interaction and emotion feedback
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Lou, Guanting; Ma, Mengchao
2018-03-01
Nowadays, service robots have been applied in many public circumstances; however, most of them still lack the function of speech interaction, especially speech-emotion interaction feedback. To make the robot more humanoid, an Arduino microcontroller was used in this study for the speech recognition module and the servo motor control module, achieving the functions of speech interaction and emotion feedback. In addition, a W5100 chip was adopted for network connection to achieve information transmission via the Internet, providing broad application prospects for the robot in the area of the Internet of Things (IoT).
Information Foraging and Change Detection for Automated Science Exploration
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from possibly-infeasible extensive preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains
2010-12-01
Flexible robotics: a new paradigm.
Aron, Monish; Haber, Georges-Pascal; Desai, Mihir M; Gill, Inderbir S
2007-05-01
The use of robotics in urologic surgery has seen exponential growth over the last 5 years. Existing surgical robots operate rigid instruments on the master/slave principle and currently allow extraluminal manipulations and surgical procedures. Flexible robotics is an entirely novel paradigm. This article explores the potential of flexible robotic platforms that could permit endoluminal and transluminal surgery in the future. Computerized catheter-control systems are being developed primarily for cardiac applications. This development is driven by the need for precise positioning and manipulation of the catheter tip in the three-dimensional cardiovascular space. Such systems employ either remote navigation in a magnetic field or a computer-controlled electromechanical flexible robotic system. We have adapted this robotic system for flexible ureteropyeloscopy and have to date completed the initial porcine studies. Flexible robotics is on the horizon. It has potential for improved scope-tip precision, superior operative ergonomics, and reduced occupational radiation exposure. In the near future, in urology, we believe that it holds promise for endoluminal therapeutic ureterorenoscopy. Looking further ahead, within the next 3-5 years, it could enable transluminal surgery.
See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction
Xu, Tian (Linger); Zhang, Hui; Yu, Chen
2016-01-01
We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875
Velocity-curvature patterns limit human-robot physical interaction
Maurice, Pauline; Huber, Meghan E.; Hogan, Neville; Sternad, Dagmar
2018-01-01
Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration. PMID:29744380
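The biological profile in this study follows the two-thirds power law, under which tangential speed scales as curvature to the -1/3 power. A sketch of generating such a profile along an elliptic path; the gain and axis lengths are illustrative, not the study's parameters:

```python
import math

def ellipse_curvature(theta, a=0.3, b=0.15):
    """Curvature of the ellipse x = a*cos(theta), y = b*sin(theta)."""
    num = a * b
    den = (a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2) ** 1.5
    return num / den

def power_law_speed(theta, gain=0.1, a=0.3, b=0.15):
    """Two-thirds power law: tangential speed v = gain * curvature**(-1/3).

    The gain and ellipse axes are illustrative values for the sketch.
    """
    return gain * ellipse_curvature(theta, a, b) ** (-1.0 / 3.0)

# Speed is lowest where curvature is highest (the ends of the major axis),
# which is the slowing-into-corners pattern natural human movement shows.
v_flat = power_law_speed(math.pi / 2)   # low-curvature point (minor axis end)
v_curvy = power_law_speed(0.0)          # high-curvature point (major axis end)
```

A non-biological condition can be produced from the same path by replacing `power_law_speed` with, for example, a constant-speed profile, which is the kind of contrast the study examines.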
Design, characterization and control of the Unique Mobility Corporation robot
NASA Technical Reports Server (NTRS)
Velasco, Virgilio B., Jr.; Newman, Wyatt S.; Steinetz, Bruce; Kopf, Carlo; Malik, John
1994-01-01
Space and mass are at a premium on any space mission, and thus any machinery designed for space use should be lightweight and compact, without sacrificing strength. It is for this reason that NASA/LeRC contracted Unique Mobility Corporation to exploit their novel actuator designs to build a robot that would advance the present state of technology with respect to these requirements. Custom-designed motors are the key feature of this robot. They are compact, high-performance dc brushless servo motors with a high pole count and low inductance, thus permitting high torque generation and rapid phase commutation. Using a custom-designed digital signal processor-based controller board, the pulse width modulation power amplifiers regulate the fast dynamics of the motor currents. In addition, the programmable digital signal processor (DSP) controller permits implementation of nonlinear compensation algorithms to account for motoring vs. regeneration, torque ripple, and back-EMF. As a result, the motors produce a high torque relative to their size and weight, and can do so with good torque regulation and acceptably high velocity saturation limits. This paper presents the Unique Mobility Corporation robot prototype: its actuators, its kinematic design, its control system, and its experimental characterization. Performance results, including saturation torques, saturation velocities and tracking accuracy tests are included.
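The report names back-EMF and torque-ripple compensation among the DSP controller's nonlinear terms. A hypothetical single step of such a current regulator; the gains, the 6-per-revolution ripple model, and the overall structure are assumptions for illustration, not the actual NASA/LeRC controller design:

```python
import math

def commanded_voltage(i_ref, i_meas, omega, theta=0.0,
                      k_p=2.0, k_e=0.05, k_ripple=0.01):
    """One step of a current regulator with nonlinear feedforward terms.

    Hypothetical sketch: the proportional gain, back-EMF constant, and the
    6-per-revolution ripple harmonic are illustrative assumptions.
    """
    feedback = k_p * (i_ref - i_meas)        # drive measured current to command
    back_emf = k_e * omega                   # cancel the speed-induced back-EMF
    ripple = k_ripple * math.sin(6 * theta)  # position-dependent ripple term
    return feedback + back_emf + ripple

# At zero current error, zero speed, and zero angle the regulator is quiescent;
# at speed, the feedforward term dominates the proportional correction.
v_idle = commanded_voltage(1.0, 1.0, 0.0)
v_running = commanded_voltage(1.0, 0.5, 100.0)
```

Putting the back-EMF term in feedforward rather than leaving it to feedback is what lets a low-inductance, high-pole-count motor hold torque regulation at high commutation rates.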
Animal Robot Assisted-therapy for Rehabilitation of Patient with Post-Stroke Depression
NASA Astrophysics Data System (ADS)
Zikril Zulkifli, Winal; Shamsuddin, Syamimi; Hwee, Lim Thiam
2017-06-01
Recently, the utilization of therapeutic animal robots has expanded. This research aims to explore robotics applications for mental healthcare in Malaysia through human-robot interaction (HRI). PARO, a robotic seal, was developed to produce psychological effects on humans. Major Depressive Disorder (MDD) is a common but severe mood disorder, and this study focuses on the interaction protocol between PARO and patients with MDD. Initially, twelve rehabilitation patients gave a subjective evaluation of their first interaction with PARO. Next, a therapeutic interaction environment was set up with PARO to act as an augmentation strategy alongside other psychological interventions for post-stroke depression. Each patient was exposed to PARO for 20 minutes. The results of the behavioural analysis were complemented with information from the HRI survey questions. The analysis also observed that individual interactors engaged with the robot in diverse ways based on their needs. Results show a positive reaction toward the acceptance of an animal robot. The intended outcome is to reduce stress levels among patients through facilitated therapy sessions with PARO.
Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social
Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka
2017-01-01
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651
Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.
Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka
2017-01-01
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Peer-to-Peer Human-Robot Interaction for Space Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Nourbakhsh, Illah
2004-01-01
NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration. To support this mission, we are working to improve human-robot interaction and performance on planetary surfaces. Rather than building robots that function as glorified tools, our focus is to enable humans and robots to work as partners and peers. In this paper, we describe our approach, which includes contextual dialogue, cognitive modeling, and metrics-based field testing.
Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank
2017-12-14
Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
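As a rough sketch of the idea, the implicit reward can be modeled as the true error signal flipped with probability 1 - bACC, and a simple tabular learner still recovers the gesture-to-action mapping despite the noise. The mapping, trial count, and learning parameters below are illustrative assumptions, not the paper's setup:

```python
import random

random.seed(0)

N = 4                      # number of gestures and actions (illustrative)
ERRP_BACC = 0.91           # single-trial ErrP detection accuracy from the paper
ALPHA, EPSILON = 0.3, 0.1  # learning rate and exploration rate (assumptions)

# Q[g][a]: learned value of answering gesture g with action a.
Q = [[0.0] * N for _ in range(N)]
target = {g: g for g in range(N)}  # hypothetical true gesture->action mapping

def errp_reward(correct):
    """Implicit EEG reward: the true error signal, flipped with prob 1 - bACC."""
    observed = correct if random.random() < ERRP_BACC else not correct
    return 1.0 if observed else -1.0

for _ in range(2000):
    g = random.randrange(N)
    if random.random() < EPSILON:                    # occasionally explore
        a = random.randrange(N)
    else:                                            # otherwise act greedily
        a = max(range(N), key=lambda x: Q[g][x])
    Q[g][a] += ALPHA * (errp_reward(a == target[g]) - Q[g][a])

learned = {g: max(range(N), key=lambda x: Q[g][x]) for g in range(N)}
```

Because the flip probability is below 0.5, correct actions still accumulate higher value in expectation, which is why 91% single-trial accuracy suffices for learning.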
Control strategies for robots in contact
NASA Astrophysics Data System (ADS)
Park, Jaeheung
In the field of robotics, there is a growing need to provide robots with the ability to interact with complex and unstructured environments. Operations in such environments pose significant challenges in terms of sensing, planning, and control. In particular, it is critical to design control algorithms that account for the dynamics of the robot and environment at multiple contacts. The work in this thesis focuses on the development of a control framework that addresses these issues. The approaches are based on the operational space control framework and estimation methods. By accounting for the dynamics of the robot and environment, modular and systematic methods are developed for robots interacting with the environment at multiple locations. The proposed force control approach demonstrates high performance in the presence of uncertainties. Building on this basic capability, new control algorithms have been developed for haptic teleoperation, multi-contact interaction with the environment, and whole body motion of non-fixed based robots. These control strategies have been experimentally validated through simulations and implementations on physical robots. The results demonstrate the effectiveness of the new control structure and its robustness to uncertainties. The contact control strategies presented in this thesis are expected to contribute to the needs in advanced controller design for humanoid and other complex robots interacting with their environments.
Metaphors to Drive By: Exploring New Ways to Guide Human-Robot Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruemmer, David J.; Gertman, David I.; Nielsen, Curtis W.
2007-08-01
Autonomous behaviors created by the research and development community are not being extensively utilized within energy, defense, security, or industrial contexts. This paper provides evidence that the interaction methods used alongside these behaviors may not provide a mental model that can be easily adopted or used by operators. Although autonomy has the potential to reduce overall workload, the use of robot behaviors often increased the complexity of the underlying interaction metaphor. This paper reports our development of new metaphors that support increased robot complexity without passing the complexity of the interaction onto the operator. Furthermore, we illustrate how recognition of problems in human-robot interactions can drive the creation of new metaphors for design and how human factors lessons in usability, human performance, and our social contract with technology have the potential for enormous payoff in terms of establishing effective, user-friendly robot systems when appropriate metaphors are used.
Soft-rigid interaction mechanism towards a lobster-inspired hybrid actuator
NASA Astrophysics Data System (ADS)
Chen, Yaohui; Wan, Fang; Wu, Tong; Song, Chaoyang
2018-01-01
Soft pneumatic actuators (SPAs) are intrinsically light-weight, compliant and therefore ideal to directly interact with humans and be implemented into wearable robotic devices. However, they also pose new challenges in describing and sensing their continuous deformation. In this paper, we propose a hybrid actuator design with bio-inspirations from the lobsters, which can generate reconfigurable bending movements through the internal soft chamber interacting with the external rigid shells. This design with joint and link structures enables us to exactly track its bending configurations that previously posed a significant challenge to soft robots. Analytic models are developed to illustrate the soft-rigid interaction mechanism with experimental validation. A robotic glove using hybrid actuators to assist grasping is assembled to illustrate their potentials in safe human-robot interactions. Considering all the design merits, our work presents a practical approach to the design of next-generation robots capable of achieving both good accuracy and compliance.
We perceive a mind in a robot when we help it
Hashimoto, Takaaki; Karasawa, Kaori
2017-01-01
People sometimes perceive a mind in inorganic entities like robots. Psychological research has shown that mind perception correlates with moral judgments and that immoral behaviors (i.e., intentional harm) facilitate mind perception toward otherwise mindless victims. We conducted a vignette experiment (N = 129; mean age = 21.8 ± 6.0 years) concerning human-robot interactions and extended previous research's results in two ways. First, mind perception toward the robot was facilitated when it received a benevolent behavior, although only when participants took the perspective of an actor. Second, imagining a benevolent interaction led to more positive attitudes toward the robot, and this effect was mediated by mind perception. These results help predict what people's reactions in future human-robot interactions would be like, and have implications for how to design future social rules about the treatment of robots. PMID:28727735
McColl, Derek; Jiang, Chuan; Nejat, Goldie
2017-02-01
For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.
Eyeblink Synchrony in Multimodal Human-Android Interaction.
Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro
2016-12-23
As a result of recent progress in communication robot technology, robots are becoming important social partners for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.
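One simple way to quantify the entrainment this study reports is to ask how often a listener blink follows a speaker blink within a short window. The metric, window length, and blink times below are illustrative, not the paper's actual analysis:

```python
def synchrony_score(speaker_blinks, listener_blinks, window=0.5):
    """Fraction of speaker blinks followed by a listener blink within `window` s.

    A simple illustrative metric; the study's actual analysis (e.g., a
    cross-correlogram of blink trains) may differ.
    """
    hits = sum(
        any(0.0 < lb - sb <= window for lb in listener_blinks)
        for sb in speaker_blinks
    )
    return hits / len(speaker_blinks)

speaker = [2.0, 5.0, 9.0, 14.0]          # android speaker blink times (s)
entrained = [2.3, 5.2, 9.4, 14.1]        # listener blinks trail the speaker
independent = [1.0, 6.8, 11.0, 13.0]     # unrelated blink times

s_entrained = synchrony_score(speaker, entrained)
s_independent = synchrony_score(speaker, independent)
```

Comparing such a score against a shuffled-timing baseline distinguishes genuine entrainment from coincidental blink proximity.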
The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.
Arnold, Thomas; Scheutz, Matthias
2017-06-01
Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), promising more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool not with a person, and deflect users from personally and socially destructive behavior the soft bodies and surfaces could normally entice.
Interactive-rate Motion Planning for Concentric Tube Robots.
Torres, Luis G; Baykal, Cenk; Alterovitz, Ron
2014-05-01
Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient's anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method's high speed enables a user to continuously and freely move the robot's tip while the motion planner ensures that the robot's shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device's shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot's tip through the environment while the robot automatically avoids collisions with the anatomical obstacles.
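The planner's split between offline roadmap precomputation and fast online queries can be sketched on a toy configuration space. The grid, obstacle set, and breadth-first query below are simplifications for illustration, not the paper's mechanics-aware roadmap over tube configurations:

```python
from collections import deque

# Offline phase: precompute a collision-free roadmap. Here the "configuration
# space" is a 2-D grid with obstacle cells removed; in the paper it is the
# space of tube rotations/translations checked against anatomy.
OBSTACLES = {(1, 1), (1, 2), (2, 1)}
SIZE = 4

def neighbors(cell):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in OBSTACLES:
            yield (nx, ny)

ROADMAP = {
    (x, y): list(neighbors((x, y)))
    for x in range(SIZE) for y in range(SIZE)
    if (x, y) not in OBSTACLES
}

def plan(start, goal):
    """Online phase: BFS over the precomputed roadmap, cheap enough to rerun
    every time the user moves the commanded tip position."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt in ROADMAP[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                frontier.append(nxt)
    return None  # goal unreachable in the roadmap

path = plan((0, 0), (2, 2))
```

The expensive collision checking all happens when `ROADMAP` is built; the per-query cost is graph search only, which is what makes interactive rates achievable.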
Qian, Feifei; Zhang, Tingnan; Korff, Wyatt; Umbanhowar, Paul B; Full, Robert J; Goldman, Daniel I
2015-10-08
Natural substrates like sand, soil, leaf litter and snow vary widely in penetration resistance. To search for principles of appendage design in robots and animals that permit high performance on such flowable ground, we developed a ground control technique by which the penetration resistance of a dry granular substrate could be widely and rapidly varied. The approach was embodied in a device consisting of an air fluidized bed trackway in which a gentle upward flow of air through the granular material resulted in a decreased penetration resistance. As the volumetric air flow, Q, increased to the fluidization transition, the penetration resistance decreased to zero. Using a bio-inspired hexapedal robot as a physical model, we systematically studied how locomotor performance (average forward speed, v(x)) varied with ground penetration resistance and robot leg frequency. Average robot speed decreased with increasing Q, and decreased more rapidly for increasing leg frequency, ω. A universal scaling model revealed that the leg penetration ratio (foot pressure relative to penetration force per unit area per depth and leg length) determined v(x) for all ground penetration resistances and robot leg frequencies. To extend our result to include continuous variation of locomotor foot pressure, we used a resistive force theory based terradynamic approach to perform numerical simulations. The terradynamic model successfully predicted locomotor performance for low resistance granular states. Despite variation in morphology and gait, the performance of running lizards, geckos and crabs on flowable ground was also influenced by the leg penetration ratio. In summary, appendage designs which reduce foot pressure can passively maintain a minimal leg penetration ratio as the ground weakens, and consequently permit maintenance of effective locomotion over a range of terradynamically challenging surfaces.
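As a purely illustrative sketch of the scaling variable described above (the exact nondimensionalization in the paper may differ), a leg penetration ratio can be written as foot pressure divided by the ground's penetration resistance per unit depth times leg length:

```python
def leg_penetration_ratio(foot_pressure, resistance_k, leg_length):
    """Hypothetical form of the scaling variable in the abstract:
    foot pressure (Pa) over penetration force per unit area per unit
    depth (Pa/m) times leg length (m).  Smaller values correspond to
    shallower penetration and better locomotion."""
    return foot_pressure / (resistance_k * leg_length)
```

The ratio rises as the ground weakens (smaller `resistance_k`), and halving foot pressure, e.g., via larger feet, restores it — the passive compensation the abstract points to.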
Intelligence for Human-Assistant Planetary Surface Robots
NASA Technical Reports Server (NTRS)
Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.
2006-01-01
The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.
Evolution and advanced technology. [of Flight Telerobotic Servicer
NASA Technical Reports Server (NTRS)
Ollendorf, Stanford; Pennington, Jack E.; Hansen, Bert, III
1990-01-01
The NASREM architecture with its standard interfaces permits development and evolution of the Flight Telerobotic Servicer to greater autonomy. Technologies in control strategies for an arm with seven DOF, including a safety system containing skin sensors for obstacle avoidance, are being developed. Planning and robotic execution software includes symbolic task planning, world model data bases, and path planning algorithms. Research over the last five years has led to the development of laser scanning and ranging systems, which use coherent semiconductor laser diodes for short range sensing. The possibility of using a robot to autonomously assemble space structures is being investigated. A control framework compatible with NASREM is being developed that allows direct global control of the manipulator. Researchers are developing systems that permit an operator to quickly reconfigure the telerobot to do new tasks safely.
Interacting With Robots to Investigate the Bases of Social Interaction.
Sciutti, Alessandra; Sandini, Giulio
2017-12-01
Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establish more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots can indeed establish an interaction with a human partner, contingently reacting to their actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction.
We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.
Simut, Ramona E; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram
2016-01-01
Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task. It was also examined whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. The children's interaction with the two partners did not differ apart from eye contact: participants made more eye contact with the social robot than with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child.
Human-Robot Interaction: Status and Challenges.
Sheridan, Thomas B
2016-06-01
The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.
Service innovation through social robot engagement to improve dementia care quality.
Chu, Mei-Tai; Khosla, Rajiv; Khaksar, Seyed Mohammad Sadegh; Nguyen, Khanh
2017-01-01
Assistive technologies, such as robots, have proven to be useful in a social context and to improve the quality of life for people with dementia (PwD). This study aims to show how the engagement between two social robots and PwD in Australian residential care facilities can improve care quality. An observational method is adopted in the research methodology to discover behavioral patterns during interactions between the robots and PwD. This observational study was undertaken to explore the improvements arising from: (1) approaching social baby-face robots (AR), (2) experiencing pleasure engaging with the robots (P), (3) interacting with the robots (IR), and (4) interacting with others (IO). The findings show that social robots can improve diversion therapy service value to PwD through sensory enrichment, positive social engagement, and entertainment. More than 11,635 behavioral reactions, such as facial expressions and gestures, from 139 PwD over 5 years were coded in order to identify the engagement effectiveness between PwD and two social robots named Sophie and Jack. The results suggest that these innovative social robots can improve the quality of care for people suffering from dementia.
Robots for use in autism research.
Scassellati, Brian; Admoni, Henny; Matarić, Maja
2012-01-01
Autism spectrum disorders are a group of lifelong disabilities that affect people's ability to communicate and to understand social cues. Research into applying robots as therapy tools has shown that robots seem to improve engagement and elicit novel social behaviors from people (particularly children and teenagers) with autism. Robot therapy for autism has been explored as one of the first application domains in the field of socially assistive robotics (SAR), which aims to develop robots that assist people with special needs through social interactions. In this review, we discuss the past decade's work in SAR systems designed for autism therapy by analyzing robot design decisions, human-robot interactions, and system evaluations. We conclude by discussing challenges and future trends for this young but rapidly developing research area.
Dionisio, Valdeci C; Brown, David A
2016-06-16
Collaborative robots are used in rehabilitation and are designed to interact with the client so as to provide the ability to assist walking therapeutically. One such device is the KineAssist, which was designed to interact, either in a self-driven mode (SDM) or in an assist mode (AM), with neurologically impaired individuals while they are walking on a treadmill surface. To understand the level of transparency (i.e., interference with movement due to the mechanical interface) between human and robot, and to estimate and account for changes in the kinetics and kinematics of the gait pattern, we tested the KineAssist under conditions of self-drive and horizontal push assistance. The aims of this study were to compare the joint kinematics, forces and moments during walking at a fixed constant treadmill belt speed and constrained walking cadence, with and without the robotic device (OUT), and to compare the biomechanics of the assistive and self-drive modes in the device. Twenty non-neurologically impaired adults participated in this study. We evaluated biomechanical parameters of walking at a fixed constant treadmill belt speed (1.0 m/s), with and without the robotic device in assistive mode. We also tested the self-drive condition, which enables the user to drive the speed and direction of a treadmill belt. Hip, knee and ankle angular displacements, ground reaction forces, hip, knee and ankle moments, and center of mass displacement were compared "in" vs "out" of the device. A repeated measures ANOVA test was applied with the three-level factor of condition (OUT, AM, and SDM), and each participant served as their own control. When comparing "in" and "out" of the device, we did not observe any interruptions and/or reversals of direction of the basic gait pattern trajectory, but there were increased ankle and hip angular excursions, vertical ground reaction force and hip moments, and reduced center of mass displacement during the "in device" condition.
Comparing the assistive vs self-drive mode in the device, participants had a more flexed posture and accentuated hip moments and propulsive force, but reduced braking force. Although the magnitudes and/or range of certain gait pattern components were altered by the device, we did not observe any interruption from the mechanical interface upon the advancement of the trajectories, nor reversals in direction of movement, which suggests that the KineAssist permits relative transparency (i.e., lack of interference with movement by the device mechanism) to the individual's gait pattern. However, there are interactive forces to take into account, which appear to be overcome by kinematic and kinetic adjustments.
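The three-level repeated measures design above (OUT, AM, SDM, with each participant as their own control) can be illustrated with a minimal one-way repeated-measures F computation. This is a generic sketch of the statistic, not the authors' analysis code, and the data values are placeholders.

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.
    data[s][c] = measurement for subject s under condition c
    (e.g., c indexing OUT, AM, SDM).  Partitions total variability
    into condition, subject, and error sums of squares."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    cond_means = [sum(row[c] for row in data) / n for c in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_err = ss_total - ss_subj - ss_cond          # residual after removing
    df_cond, df_err = k - 1, (k - 1) * (n - 1)     # subject and condition effects
    return (ss_cond / df_cond) / (ss_err / df_err)
```

Removing the subject sum of squares from the error term is what makes the within-subject design more sensitive than a between-groups comparison.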
Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.
Hongbo Wang; Kosuge, K
2012-01-01
Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state by using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment; reasons for the failure are explained. The use of the latter controller is validated by the experimental results.
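The state-estimation step can be illustrated with a scalar Kalman filter; this is a deliberately simplified, linear stand-in for the paper's EKF with an inverted-pendulum model (the noise values below are arbitrary).

```python
def kalman_1d(measurements, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: predict (inflate uncertainty by
    process noise q), then update with a noisy measurement (variance r).
    An EKF runs the same predict/update cycle with a nonlinear model
    linearized at each step."""
    x, p = x0, p0
    for z in measurements:
        p = p + q                  # predict: process noise grows covariance
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update: pull state toward measurement
        p = (1.0 - k) * p          # shrink covariance after measurement
    return x, p
```

With repeated measurements near a true value, the estimate converges and the covariance settles well below its prior.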
Rehabilitation exoskeletal robotics. The promise of an emerging field.
Pons, José L
2010-01-01
Exoskeletons are wearable robots exhibiting a close cognitive and physical interaction with the human user. These are rigid robotic exoskeletal structures that typically operate alongside human limbs. Scientific and technological work on exoskeletons began in the early 1960s but has only recently been applied to rehabilitation and functional substitution in patients suffering from motor disorders. Key topics for further development of exoskeletons in rehabilitation scenarios include the need for robust human-robot multimodal cognitive interaction, safe and dependable physical interaction, true wearability and portability, and user aspects such as acceptance and usability. This discussion provides an overview of these aspects and draws conclusions regarding potential future research directions in robotic exoskeletons.
Performance and stability of telemanipulators using bilateral impedance control. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moore, Christopher Lane
1991-01-01
A new method of control for telemanipulators called bilateral impedance control is investigated. This new method differs from previous approaches in that interaction forces are used as the communication signals between the master and slave robots. The new control architecture has several advantages: (1) It allows the master robot and the slave robot to be stabilized independently without becoming involved in the overall system dynamics; (2) It permits the system designers to arbitrarily specify desired performance characteristics such as the force and position ratios between the master and slave; (3) The impedance at both ends of the telerobotic system can be modulated to suit the requirements of the task. The main goals of the research are to characterize the performance and stability of the new control architecture. The dynamics of the telerobotic system are described by a bond graph model that illustrates how energy is transformed, stored, and dissipated. Performance can be completely described by a set of three independent parameters. These parameters are fundamentally related to the structure of the H matrix that regulates the communication of force signals within the system. Stability is analyzed with two mathematical techniques: the Small Gain Theorem and the Multivariable Nyquist Criterion. The theoretical predictions for performance and stability are experimentally verified by implementing the new control architecture on a multidegree of freedom telemanipulator.
INL Autonomous Navigation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Autonomous Navigation System provides instructions for autonomously navigating a robot. The system permits high-speed autonomous navigation including obstacle avoidance, waypoint navigation and path planning in both indoor and outdoor environments.
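Waypoint navigation in its simplest form can be sketched as below; this is a hypothetical point-robot illustration, not the INL system's algorithm, which additionally handles obstacle avoidance and path planning.

```python
import math

def follow_waypoints(start, waypoints, step=0.1, tol=0.05):
    """Move a point robot a fixed step toward each waypoint in turn,
    switching to the next waypoint once within tol of the current one.
    Returns the visited positions."""
    x, y = start
    trace = [(x, y)]
    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > tol:
            d = math.hypot(wx - x, wy - y)
            step_len = min(step, d)        # never overshoot the waypoint
            x += step_len * (wx - x) / d
            y += step_len * (wy - y) / d
            trace.append((x, y))
    return trace
```

A real navigator would replace the straight-line segments with planned, obstacle-free paths between successive waypoints.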
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Mars Robotics in the Elementary School
NASA Astrophysics Data System (ADS)
Bonett, D.
2003-05-01
Kenneth E. Little Elementary is a public school serving grades Pre-K to 5 in Bacliff, Texas. It has an ethnically diverse population of one thousand boys and girls. It is a Title 1 school, with eighty-six percent of the students receiving free or reduced-price meals. K.E. Little has a large at-risk population with a thirty-three percent transition rate. The Young Astronauts @ K.E. Little is an ongoing afterschool space science program in its third year of operation. Thirty fourth- and fifth-grade students were involved in our spring robotics program. Each cooperative group was assigned a LEGO robotics kit to inventory, organize, and familiarize themselves with. Each team made decisions, by consensus, concerning the robot's design and capabilities. Students used the Dell Computer Lab on campus to program their robots. Although time did not permit the construction of a simulated Martian landscape, future Young Astronauts will continue this project in January 2004.
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II
2011-09-01
for the Soldier, to ensure mission success while maximizing the survivability and lethality through the synergistic interaction of equipment...based touch interface for gloved finger interactions. This interface had to have larger-than-normal touch-screen buttons for commanding the robot...C.; Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR
Eyeblink Synchrony in Multimodal Human-Android Interaction
Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro
2016-01-01
As a result of recent progress in communication robot technology, robots are becoming important social partners for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human’s attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners’ eyeblinks were entrained to android speakers’ eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android’s hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions. PMID:28009014
NASA Astrophysics Data System (ADS)
Malik, Norjasween Abdul; Shamsuddin, Syamimi; Yussof, Hanafiah; Azfar Miskam, Mohd; Che Hamid, Aminullah
2013-12-01
Research evidence is accumulating regarding the potential use of robots for the rehabilitation of children with autism. The purpose of this paper is to elaborate on the results of communicational response in two children with autism during interaction with the humanoid robot NAO. Both autistic subjects in this study had been diagnosed with mild autism. Following the outcome of our first pilot study, the aim of this experiment was to explore the application of the NAO robot to engage with a child and further teach about emotions through a game-centered and song-based approach. The experimental procedure involved interaction between the humanoid robot NAO and each child through a series of four different modules. The observation items are based on ten items selected with reference to GARS-2 (Gilliam Autism Rating Scale, second edition) and input from clinicians and therapists. The results clearly indicated that both children showed a positive response throughout the interaction. Negative responses such as feeling scared or shying away from the robot were not detected. Two-way communication between the child and robot in real time had a significantly positive impact on the responses towards the robot. To conclude, it is feasible to include robot-based interaction specifically to elicit communicational response as part of the rehabilitation intervention for children with autism.
I want what you've got: Cross-platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy that combines with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems for how humans and robots interact in true interactive teams.
Sung, Huei-Chuan; Chang, Shu-Min; Chin, Mau-Yu; Lee, Wen-Li
2015-03-01
Animal-assisted therapy is gaining popularity as part of therapeutic activities for older adults in many long-term care facilities. However, concerns about dog bites, allergic responses to pets, disease, and insufficient resources available to care for a real pet have led many residential care facilities to ban this therapy. There are situations where a substitute artificial companion, such as a robotic pet, may serve as a better alternative. This pilot study used a one-group pre- and posttest design to evaluate the effect of a robot-assisted therapy for older adults. Sixteen eligible participants enrolled in the study and received a group robot-assisted therapy using a seal-like robot pet for 30 minutes twice a week for 4 weeks. All participants received assessments of their communication and interaction skills using the Assessment of Communication and Interaction Skills (ACIS-C) and of activity participation using the Activity Participation Scale at baseline and at week 4. A total of 12 participants completed the study. The Wilcoxon signed rank test showed that participants' communication and interaction skills (z = -2.94, P = 0.003) and activity participation (z = -2.66, P = 0.008) were significantly improved after receiving the 4-week robot-assisted therapy. By interacting with a robot pet such as Paro, older adults can improve their communication, interaction skills, and activity participation. Robot-assisted therapy can be provided as a routine activity program and has the potential to improve the social health of older adults in residential care facilities. Copyright © 2014 Wiley Publishing Asia Pty Ltd.
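The before/after comparison above uses the Wilcoxon signed rank test; its core statistic can be computed in a few lines of plain Python. This is an illustrative sketch with made-up scores, not the study's data or analysis script (which would typically use a statistics package for the z and P values).

```python
def wilcoxon_signed_rank(pre, post):
    """Signed-rank statistic for paired pre/post scores.  Returns
    (W+, W-): the rank sums of positive and negative post-pre
    differences.  Zero differences are dropped; tied absolute
    differences share an average rank."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2.0   # mean of the 1-based ranks i+1 .. j
        for idx in range(i, j):
            ranks.setdefault(abs(ranked[idx]), avg)
        i = j
    w_pos = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_neg = sum(ranks[abs(d)] for d in diffs if d < 0)
    return w_pos, w_neg
```

The smaller of W+ and W- is compared against critical values (or converted to a z score for larger samples, as in the reported results).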
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.
Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method, permitting application-specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system. 7 refs.
Loving Machines: Theorizing Human and Sociable-Technology Interaction
NASA Astrophysics Data System (ADS)
Shaw-Garlock, Glenda
Today, human and sociable-technology interaction is a contested site of inquiry. Some regard social robots as an innovative medium of communication that offers new avenues for expression, communication, and interaction. Others question the moral veracity of human-robot relationships, suggesting that such associations risk psychological impoverishment. What seems clear is that the emergence of social robots in everyday life will alter the nature of social interaction, bringing with it a need for new theories to understand the shifting terrain between humans and machines. This work provides a historical context for human and sociable robot interaction. Current research related to human-sociable-technology interaction is considered in relation to arguments that confront a humanist view confining 'technological things' to the nonhuman side of the human/nonhuman binary relation. Finally, it recommends a theoretical approach for the study of human and sociable-technology interaction that accommodates increasingly personal relations between human and nonhuman technologies.
ERIC Educational Resources Information Center
Dunst, Carl J.; Trivette, Carol M.; Prior, Jeremy; Hamby, Deborah W.; Embler, Davon
2013-01-01
Findings from a survey of parents' ratings of seven different human-like qualities of four socially interactive robots are reported. The four robots were Popchilla, Keepon, Kaspar, and CosmoBot. The participants were 96 parents and other primary caregivers of young children with disabilities 1 to 12 years of age. Results showed that Popchilla, a…
ERIC Educational Resources Information Center
Dunst, Carl J.; Trivette, Carol M.; Prior, Jeremy; Hamby, Deborah W.; Embler, Davon
2013-01-01
A number of different types of socially interactive robots are being used as part of interventions with young children with disabilities to promote their joint attention and language skills. Parents' judgments of two dimensions (acceptance and importance) of the social validity of four different social robots were the focus of the study described…
Teaching Human Poses Interactively to a Social Robot
Gonzalez-Pacheco, Victor; Malfaz, Maria; Fernandez, Fernando; Salichs, Miguel A.
2013-01-01
The main activity of social robots is to interact with people. In order to do that, the robot must be able to understand what the user is saying or doing. Typically, this capability consists of pre-programmed behaviors or is acquired through controlled learning processes, which are executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way as people do. That is, hearing its teacher's explanations and acquiring new knowledge in real time. The architecture leans on two main components: an RGB-D (Red-, Green-, Blue- Depth) -based visual system, which gathers the user examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot is able to naturally learn the poses the teacher is showing to it by maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with a few training examples, the system reaches high accuracy and robustness. This method shows how to combine data from the visual and auditory systems for the acquisition of new knowledge in a natural manner. Such a natural way of training enables robots to learn from users, even if they are not experts in robotics. PMID:24048336
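The teach-by-showing loop the architecture describes can be caricatured as pairing a spoken label (what an ASR system would transcribe) with a pose feature vector (what an RGB-D skeleton tracker would supply), then classifying new poses by nearest neighbor. All class names and vectors below are hypothetical; the paper's actual architecture is considerably richer.

```python
import math

class PoseTeacher:
    """Toy stand-in for interactive pose learning: each teach() call
    pairs an ASR-transcribed label with an RGB-D-derived feature vector;
    classify() does a nearest-neighbor lookup over the taught examples."""

    def __init__(self):
        self.examples = []  # list of (label, feature_vector)

    def teach(self, label, features):
        self.examples.append((label, tuple(features)))

    def classify(self, features):
        label, _ = min(self.examples,
                       key=lambda ex: math.dist(ex[1], features))
        return label

robot = PoseTeacher()
robot.teach("arms_up", (0.0, 1.0))     # teacher shows a pose and names it
robot.teach("arms_down", (0.0, -1.0))
```

Incremental teaching of this kind needs no offline training phase, which is the property the paper emphasizes: each new example is usable immediately.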
Evaluation of microsurgical tasks with OCT-guided and/or robot-assisted ophthalmic forceps
Yu, Haoran; Shen, Jin-Hui; Shah, Rohan J.; Simaan, Nabil; Joos, Karen M.
2015-01-01
Real-time intraocular optical coherence tomography (OCT) visualization of tissues with surgical feedback can enhance retinal surgery. An intraocular 23-gauge B-mode forward-imaging co-planar OCT-forceps, coupling connectors, and algorithms were developed to form a unique ophthalmic surgical robotic system. Approaches to the surface of a phantom or goat retina with manually or robotically controlled forceps, with and without real-time OCT guidance, were performed. Efficiency of lifting phantom membranes was examined. Placing the co-planar OCT imaging probe internal to the surgical tool reduced instrument shadowing and permitted constant tracking. Robotic assistance together with real-time OCT feedback improved depth-perception accuracy. The first-generation integrated OCT-forceps was capable of peeling membrane phantoms despite smooth tips. PMID:25780736
Digital redesign of the control system for the Robotics Research Corporation model K-1607 robot
NASA Technical Reports Server (NTRS)
Carroll, Robert L.
1989-01-01
The analog control system for positioning each link of the Robotics Research Corporation Model K-1607 robot manipulator was redesigned for computer control. To accomplish the redesign, a linearized model of the dynamic behavior of the robot was developed. The parameters of the model were determined by examination of input-output data collected in closed-loop operation of the analog control system. The robot manipulator possesses seven degrees of freedom in its motion. The analog control system installed by the manufacturer attempts to control the positioning of each link without feedback from other links. Constraints on the design of a digital control system include: the robot cannot be disassembled for measurement of parameters; the digital control system must avoid filtering operations if possible, because of limited computer capability; and criteria for judging control-system performance are lacking. The resulting design employs sampled-data position and velocity feedback. The design criteria require the control system's gain margin and phase margin, measured at the same frequencies, to match those provided by the analog control system.
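The sampled-data position and velocity feedback can be illustrated with a minimal discrete-time loop on a double-integrator stand-in for one linearized link. The gains, sampling period, and plant model are invented for illustration; they are not the K-1607's actual parameters.

```python
def simulate_link(kp=4.0, kv=3.0, target=1.0, dt=0.01, steps=2000):
    """Sampled-data control of a unit-inertia link: at each sample the
    torque command combines position error and velocity feedback."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - x) - kv * v  # position + velocity feedback
        v += u * dt                     # Euler integration of the plant
        x += v * dt
    return x, v
```

With these (assumed) gains the closed loop is well damped, and the simulated link settles at the commanded position within a few seconds; in the paper's setting the gains would instead be chosen to reproduce the analog loop's gain and phase margins.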
Multi-Axis Force Sensor for Human-Robot Interaction Sensing in a Rehabilitation Robotic Device.
Grosu, Victor; Grosu, Svetlana; Vanderborght, Bram; Lefeber, Dirk; Rodriguez-Guerrero, Carlos
2017-06-05
Human-robot interaction sensing is a compulsory feature in modern robotic systems where direct contact or close collaboration is desired. Rehabilitation and assistive robotics are fields where interaction forces are required both for safety and for increased control performance of the device, with a more comfortable experience for the user. To provide efficient interaction feedback between the user and the rehabilitation device, high-performance sensing units are required. This work introduces a novel design of a multi-axis force sensor dedicated to measuring pelvis interaction forces in a rehabilitation exoskeleton device. The sensor is designed with different sensitivity characteristics for the three axes of interest and incorporates movable parts to allow free rotations and limit crosstalk errors. Integrated sensor electronics make it easy to acquire and process data for a real-time distributed system architecture. Two of the developed sensors are integrated and tested in a complex gait rehabilitation device for safe and compliant control.
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
Molecular Robots Obeying Asimov's Three Laws of Robotics.
Kaminka, Gal A; Spokoini-Stern, Rachel; Amir, Yaniv; Agmon, Noa; Bachelet, Ido
2017-01-01
Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even emulate the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale, when we understand robots on this scale on the basis of their interactions.
Interactions With Robots: The Truths We Reveal About Ourselves.
Broadbent, Elizabeth
2017-01-03
In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in healthcare, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.
Direct interaction with an assistive robot for individuals with chronic stroke.
Kmetz, Brandon; Markham, Heather; Brewer, Bambi R
2011-01-01
Many robotic systems have been developed to provide assistance to individuals with disabilities. Most of these systems require the individual to interact with the robot via a joystick or keypad, though some utilize techniques such as speech recognition or selection of objects with a laser pointer. In this paper, we describe a prototype system using a novel method of interaction with an assistive robot. A touch-sensitive skin enables the user to directly guide a robotic arm to a desired position. When the skin is released, the robot remains fixed in position. The target population for this system is individuals with hemiparesis due to chronic stroke. The system can be used as a substitute for the paretic arm and hand in bimanual tasks such as holding a jar while removing the lid. This paper describes the hardware and software of the prototype system, which includes a robotic arm, the touch-sensitive skin, a hook-style prehensor, and weight compensation and speech recognition software.
A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics
Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo
2017-01-01
Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473
An Interactive Astronaut-Robot System with Gesture Control
Liu, Jinguo; Luo, Yifan; Ju, Zhaojie
2016-01-01
Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants through speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data glove with a space suit, enabling the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system. PMID:27190503
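A minimal sketch of the PSO hyperparameter search: the SVM's cross-validation error is replaced here by a stand-in objective over (C, gamma), since the point is the swarm update rule rather than the classifier itself. The swarm size, coefficients, and bounds are illustrative assumptions, not values from the paper.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimization: each particle's velocity
    blends inertia, attraction to its personal best, and attraction to
    the swarm's global best; positions are clamped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for SVM cross-validation error over (C, gamma);
# a real system would train and score the SVM here instead.
objective = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2
best, best_val = pso(objective, bounds=[(0.1, 10.0), (0.01, 1.0)])
```

In the paper's setting, `objective` would be the gesture-recognition error of an SVM trained with the candidate (C, gamma), so the swarm converges toward the best-performing hyperparameters.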
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development and simulation of a real time controller for an intelligent, vision guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and dynamic database. The dynamic database stores both global environmental information and local information including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability since both models can be easily stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with a high-level control are potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
Hazardous materials emergency response mobile robot
NASA Technical Reports Server (NTRS)
Stone, Henry W. (Inventor); Lloyd, James (Inventor); Alahuzos, George (Inventor)
1992-01-01
A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated for by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.
Honig, Shanee; Oron-Gilad, Tal
2018-01-01
While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore when people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and feelings toward robots, and how these effects can be mitigated. Fifty-two studies were identified relating to communicating failures and their causes, the influence of failures on human-robot interaction (HRI), and mitigating failures. Since little research has been done on these topics within the HRI community, insights from the fields of human computer interaction (HCI), human factors engineering, cognitive engineering and experimental psychology are presented and discussed. Based on the literature, we developed a model of information processing for robotic failures (Robot Failure Human Information Processing, RF-HIP), that guides the discussion of our findings. The model describes the way people perceive, process, and act on failures in human robot interaction. The model includes three main parts: (1) communicating failures, (2) perception and comprehension of failures, and (3) solving failures. Each part contains several stages, all influenced by contextual considerations and mitigation strategies. Several gaps in the literature have become evident as a result of this evaluation. More focus has been given to technical failures than interaction failures. Few studies focused on human errors, on communicating failures, or the cognitive, psychological, and social determinants that impact the design of mitigation strategies. 
By providing the stages of human information processing, RF-HIP can be used as a tool to promote the development of user-centered failure-handling strategies for HRIs.
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
A Robotic Coach Architecture for Elder Care (ROCARE) Based on Multi-user Engagement Models
Fan, Jing; Bian, Dayi; Zheng, Zhi; Beuscher, Linda; Newhouse, Paul A.; Mion, Lorraine C.; Sarkar, Nilanjan
2017-01-01
The aging population with its concomitant medical conditions, physical and cognitive impairments, at a time of strained resources, establishes the urgent need to explore advanced technologies that may enhance function and quality of life. Recently, robotic technology, especially socially assistive robotics, has been investigated to address the physical, cognitive, and social needs of older adults. Most systems to date have predominantly focused on one-on-one human-robot interaction (HRI). In this paper, we present a multi-user engagement-based robotic coach system architecture (ROCARE). ROCARE is capable of administering both one-on-one and multi-user HRI, providing implicit and explicit channels of communication, and individualized activity management for long-term engagement. Two preliminary feasibility studies, a one-on-one interaction and a triadic interaction with two humans and a robot, were conducted and the results indicated potential usefulness and acceptance by older adults, with and without cognitive impairment. PMID:28113672
Damholdt, Malene F.; Nørskov, Marco; Yamazaki, Ryuji; Hakli, Raul; Hansen, Catharina Vesterager; Vestergaard, Christina; Seibt, Johanna
2015-01-01
Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. 
This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed. PMID:26635646
ReACT!: An Interactive Educational Tool for AI Planning for Robotics
ERIC Educational Resources Information Center
Dogmus, Zeynep; Erdem, Esra; Patogulu, Volkan
2015-01-01
This paper presents ReAct!, an interactive educational tool for artificial intelligence (AI) planning for robotics. ReAct! enables students to describe robots' actions and change in dynamic domains without first having to know about the syntactic and semantic details of the underlying formalism, and to solve planning problems using…
ERIC Educational Resources Information Center
Flannery, Louise P.; Bers, Marina Umaschi
2013-01-01
Young learners today generate, express, and interact with sophisticated ideas using a range of digital tools to explore interactive stories, animations, computer games, and robotics. In recent years, new developmentally appropriate robotics kits have been entering early childhood classrooms. This paper presents a retrospective analysis of one…
Interaction between Task Oriented and Affective Information Processing in Cognitive Robotics
NASA Astrophysics Data System (ADS)
Haazebroek, Pascal; van Dantzig, Saskia; Hommel, Bernhard
There is an increasing interest in endowing robots with emotions. Robot control, however, is still often very task-oriented. We present a cognitive architecture that allows the combination of, and interaction between, task representations and affective information processing. Our model is validated by comparing simulation results with empirical data from experimental psychology.
Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien
2018-02-01
Previous studies showed the existence of implicit interaction rules shared by human walkers when crossing each other. In particular, each walker contributes to the collision avoidance task, and the crossing order, as set at the beginning, is preserved along the interaction. This order determines the adaptation strategy: the first to arrive increases his/her advance by slightly accelerating and changing his/her heading, whereas the second one slows down and moves in the opposite direction. In this study, we analyzed the behavior of human walkers crossing the trajectory of a mobile robot that was programmed to reproduce this human avoidance strategy. In contrast with a previous study, which showed that humans mostly prefer to give way to a non-reactive robot, we observed similar behaviors between human-human avoidance and human-robot avoidance when the robot replicates the human interaction rules. We discuss this result in relation to the importance of controlling robots in a human-like way in order to ease their cohabitation with humans. Copyright © 2017 Elsevier B.V. All rights reserved.
When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions
ERIC Educational Resources Information Center
Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman
2012-01-01
In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…
Hazardous materials emergency response mobile robot
NASA Technical Reports Server (NTRS)
Stone, Henry W. (Inventor); Lloyd, James W. (Inventor); Alahuzos, George A. (Inventor)
1995-01-01
A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm is presented. This robot inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated for by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist, which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.
Anthropomorphism in Human–Robot Co-evolution
Damiano, Luisa; Dumouchel, Paul
2018-01-01
Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth. PMID:29632507
Dynamical network interactions in distributed control of robots
NASA Astrophysics Data System (ADS)
Buscarino, Arturo; Fortuna, Luigi; Frasca, Mattia; Rizzo, Alessandro
2006-03-01
In this paper the dynamical network model of the interactions within a group of mobile robots is investigated and proposed as a possible strategy for controlling the robots without central coordination. Motivated by the results of the analysis of our simple model, we show that the system performance in the presence of noise can be improved by including long-range connections between the robots. Finally, a suitable strategy based on this model to control exploration and transport is introduced.
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where the human control of a robot agent can be experimentally validated.
From Autonomous Robots to Artificial Ecosystems
NASA Astrophysics Data System (ADS)
Mastrogiovanni, Fulvio; Sgorbissa, Antonio; Zaccaria, Renato
During the past few years, starting from the two mainstream fields of Ambient Intelligence [2] and Robotics [17], several authors recognized the benefits of the so-called Ubiquitous Robotics paradigm. According to this perspective, mobile robots are no longer autonomous, physically situated and embodied entities adapting themselves to a world tailored for humans: on the contrary, they are able to interact with devices distributed throughout the environment and exchange heterogeneous information by means of communication technologies. Information exchange, coupled with simple actuation capabilities, is meant to replace physical interaction between robots and their environment. Two benefits are evident: (i) smart environments overcome inherent limitations of mobile platforms, whereas (ii) mobile robots offer a mobility dimension unknown to smart environments.
New Paradigms for Human-Robotic Collaboration During Human Planetary Exploration
NASA Astrophysics Data System (ADS)
Parrish, J. C.; Beaty, D. W.; Bleacher, J. E.
2017-02-01
Human exploration missions to other planetary bodies offer new paradigms for collaboration (control, interaction) between humans and robots beyond the methods currently used to control robots from Earth and robots in Earth orbit.
Cognitive and sociocultural aspects of robotized technology: innovative processes of adaptation
NASA Astrophysics Data System (ADS)
Kvesko, S. B.; Kvesko, B. B.; Kornienko, M. A.; Nikitina, Y. A.; Pankova, N. M.
2018-05-01
The paper dwells upon the interaction between socio-cultural phenomena and the cognitive characteristics of robotized technology. An interdisciplinary approach was employed in order to cast light on the manifold and multilevel identity of scientific advance in terms of robotized technology within the mental realm. Analyzing robotized technology from the viewpoint of its significance for modern society is one of the upcoming trends in the contemporary scientific realm. The robots under production are capable of interacting with people; this results in a growing necessity for studies on the social status of robotized technological items. The socio-cultural aspect of cognitive robotized technology is reflected in the fact that nature becomes ‘aware’ of itself via the human brain, while a human being tends to strive for perfection in intellectual and moral dimensions.
Design principles of a cooperative robot controller
NASA Technical Reports Server (NTRS)
Hayward, Vincent; Hayati, Samad
1987-01-01
The paper describes the design of a controller for cooperative robots being developed at McGill University in a collaborative effort with the Jet Propulsion Laboratory. The first part of the paper discusses the background and motivation for multiple-arm control. Then, a set of programming primitives, which are based on the RCCL system and which permit a programmer to specify cooperative tasks, is described. The first group of primitives are motion primitives which specify asynchronous motions, master/slave motions, and cooperative motions. In the context of cooperative robots, trajectory generation issues are discussed and the implementation is described. A second set of primitives provides for the specification of spatial relationships. The relations between programming and control in the case of multiple robots are examined. Finally, the paper describes the allocation of various tasks among a set of microprocessors sharing a common bus.
Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.
Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung
2018-03-05
In order to properly control rehabilitation robotic devices, the measurement of interaction force and motion between patient and robot is an essential part. Usually, however, this is a complex task that requires the use of accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has similar performance to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors to estimate interaction force and motion can be adopted to replace accurate but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
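As an illustration of the virtual-sensor idea described in this abstract (reconstructing an unmeasured force from a device model driven by cheap position readings), here is a minimal sketch. The spring-damper model and every parameter value are illustrative assumptions, not the UHP model from the paper.

```python
def virtual_force_sensor(x, x_prev, dt, k=120.0, b=8.0, x_rest=0.0):
    """Estimate the interaction force at the contact point from two
    successive low-cost position readings, using an assumed
    spring-damper model of the mechanism (illustrative parameters)."""
    v = (x - x_prev) / dt            # finite-difference velocity estimate
    return k * (x - x_rest) + b * v  # model-based force estimate [N]

# Example: 0.02 m displacement moving at roughly 0.1 m/s
f = virtual_force_sensor(0.02, 0.019, 0.01)
```

A real implementation would use the robot's identified dynamic model and filtered velocity estimates rather than a raw finite difference.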
Terrain following of arbitrary surfaces using a high intensity LED proximity sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.
1992-01-01
Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method, permitting application-specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system. 7 refs.
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
2018-01-01
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888
Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily
2011-08-08
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that specific robot could not be traded for another robot without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
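The wrapper-class approach this abstract describes (one interface, many interchangeable robots) can be sketched as an abstract base class. The class and method names below are hypothetical and do not reproduce the actual H3DAPI interface.

```python
from abc import ABC, abstractmethod

class HapticRobotWrapper(ABC):
    """Hypothetical wrapper interface: experiment code talks only to
    this class, so one robot can be swapped for another without
    recoding the haptic/graphic environment."""

    @abstractmethod
    def read_position(self):
        """Return the end-effector position (x, y, z) in metres."""

    @abstractmethod
    def command_force(self, fx, fy, fz):
        """Send a force command to the device."""

class SimulatedRobot(HapticRobotWrapper):
    """Stand-in device: supporting a second robot needs only a new
    subclass, not changes to the experiment code."""
    def __init__(self):
        self._pos = (0.0, 0.0, 0.0)
        self.last_force = None
    def read_position(self):
        return self._pos
    def command_force(self, fx, fy, fz):
        self.last_force = (fx, fy, fz)

def render_spring(robot, k=200.0):
    """Robot-agnostic haptic effect: pull the device toward the origin."""
    x, y, z = robot.read_position()
    robot.command_force(-k * x, -k * y, -k * z)
```

Any experiment written against `HapticRobotWrapper` runs unchanged on every subclass, which is the development-time saving the paper argues for.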
Role of expressive behaviour for robots that learn from people.
Breazeal, Cynthia
2009-12-12
Robotics has traditionally focused on developing intelligent machines that can manipulate and interact with objects. The promise of personal robots, however, challenges researchers to develop socially intelligent robots that can collaborate with people to do things. In the future, robots are envisioned to assist people with a wide range of activities such as domestic chores, helping elders to live independently longer, serving a therapeutic role to help children with autism, assisting people undergoing physical rehabilitation and much more. Many of these activities shall require robots to learn new tasks, skills and individual preferences while 'on the job' from people with little expertise in the underlying technology. This paper identifies four key challenges in developing social robots that can learn from natural interpersonal interaction. The author highlights the important role that expressive behaviour plays in this process, drawing on examples from the past 8 years of her research group, the Personal Robots Group at the MIT Media Lab.
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. It should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. Thus the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
Reversal Learning Task in Children with Autism Spectrum Disorder: A Robot-Based Approach.
Costescu, Cristina A; Vanderborght, Bram; David, Daniel O
2015-11-01
Children with autism spectrum disorder (ASD) engage in highly perseverative and inflexible behaviours. Technological tools, such as robots, have received increased attention as social reinforcers and/or assistive tools for improving the performance of children with ASD. The aim of our study is to investigate the role of the robotic toy Keepon in a cognitive flexibility task performed by children with ASD and typically developing (TD) children. This study included 81 children: 40 TD children and 41 children with ASD. Each participant went through two conditions, robot interaction and human interaction, in which they performed the reversal learning task. Our primary outcomes are the number of errors from the acquisition phase and from the reversal phase of the task; as secondary outcomes we measured attentional engagement and positive affect. The results of this study showed that children with ASD are more engaged in the task and seem to enjoy the task more when interacting with the robot compared with the interaction with the adult. On the other hand, their cognitive flexibility performance is, in general, similar in the robot and the human conditions, with the exception of the learning phase, where the robot can interfere with performance. Implications for future research and practice are discussed.
Probabilistic self-localisation on a qualitative map based on occlusions
NASA Astrophysics Data System (ADS)
Santos, Paulo E.; Martins, Murilo F.; Fenelon, Valquiria; Cozman, Fabio G.; Dee, Hannah M.
2016-09-01
Spatial knowledge plays an essential role in human reasoning, permitting tasks such as locating objects in the world (including oneself), reasoning about everyday actions and describing perceptual information. This is also the case in the field of mobile robotics, where one of the most basic (and essential) tasks is the autonomous determination of the pose of a robot with respect to a map, given its perception of the environment. This is the problem of robot self-localisation (or simply the localisation problem). This paper presents a probabilistic algorithm for robot self-localisation that is based on a topological map constructed from the observation of spatial occlusion. Distinct locations on the map are defined by means of a classical formalism for qualitative spatial reasoning, whose base definitions are closer to the human categorisation of space than traditional, numerical, localisation procedures. The approach herein proposed was systematically evaluated through experiments using a mobile robot equipped with a RGB-D sensor. The results obtained show that the localisation algorithm is successful in locating the robot in qualitatively distinct regions.
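The probabilistic core of a localisation scheme like the one this abstract describes is a discrete Bayes update over qualitative map regions. The sketch below is generic; the region names and likelihood values are made up for illustration and are not taken from the paper.

```python
def bayes_update(belief, likelihood):
    """One discrete Bayes update: belief[r] is P(robot in region r);
    likelihood[r] is P(observation | region r) for the current
    occlusion-based observation. Returns the normalized posterior."""
    posterior = {r: belief[r] * likelihood[r] for r in belief}
    total = sum(posterior.values())
    return {r: p / total for r, p in posterior.items()}

# Illustrative: three qualitative regions and an observation whose
# occlusion pattern is most likely seen from region 'B'.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
likelihood = {"A": 0.1, "B": 0.7, "C": 0.2}
belief = bayes_update(belief, likelihood)
```

Repeating the update as the robot moves and observes concentrates probability mass on the qualitatively distinct region it actually occupies.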
Affordance Equivalences in Robotics: A Formalism
Andries, Mihai; Chavez-Garcia, Ricardo Omar; Chatila, Raja; Giusti, Alessandro; Gambardella, Luca Maria
2018-01-01
Automatic knowledge grounding is still an open problem in cognitive robotics. Recent research in developmental robotics suggests that a robot's interaction with its environment is a valuable source for collecting such knowledge about the effects of the robot's actions. A useful concept for this process is that of an affordance, defined as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. This paper proposes a formalism for defining and identifying affordance equivalence. By comparing the elements of two affordances, we can identify equivalences between affordances, and thus acquire grounded knowledge for the robot. This is useful when changes occur in the set of actions or objects available to the robot, allowing it to find alternative paths to reach goals. In the experimental validation phase we verify whether the recorded interaction data is coherent with the identified affordance equivalences. This is done by querying a Bayesian Network that serves as a container for the collected interaction data, and verifying that both affordances considered equivalent yield the same effect with a high probability. PMID:29937724
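The affordance tuple defined in this abstract (actor, action, object, effect) lends itself to a direct sketch. The equivalence criterion below, same effect via a different action or object, is a deliberate simplification of the paper's formalism, and the example affordances are invented.

```python
from collections import namedtuple

# An affordance as defined in the abstract: actor, action, object, effect.
Affordance = namedtuple("Affordance", ["actor", "action", "obj", "effect"])

def equivalent_effect(a, b):
    """Simplified equivalence test: two affordances count as equivalent
    here when they yield the same effect through a different action or
    object, giving the robot an alternative path to the same goal."""
    return a.effect == b.effect and (a.action != b.action or a.obj != b.obj)

push = Affordance("robot", "push", "box", "box_displaced")
pull = Affordance("robot", "pull", "box", "box_displaced")
```

In the paper's setting such candidate equivalences would then be checked against interaction data (e.g. via the Bayesian Network) before being trusted.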
Evaluation by Expert Dancers of a Robot That Performs Partnered Stepping via Haptic Interaction.
Chen, Tiffany L; Bhattacharjee, Tapomayukh; McKay, J Lucas; Borinski, Jacquelyn E; Hackney, Madeleine E; Ting, Lena H; Kemp, Charles C
2015-01-01
Our long-term goal is to enable a robot to engage in partner dance for use in rehabilitation therapy, assessment, diagnosis, and scientific investigations of two-person whole-body motor coordination. Partner dance has been shown to improve balance and gait in people with Parkinson's disease and in older adults, which motivates our work. During partner dance, dance couples rely heavily on haptic interaction to convey motor intent such as speed and direction. In this paper, we investigate the potential for a wheeled mobile robot with a human-like upper-body to perform partnered stepping with people based on the forces applied to its end effectors. Blindfolded expert dancers (N=10) performed a forward/backward walking step to a recorded drum beat while holding the robot's end effectors. We varied the admittance gain of the robot's mobile base controller and the stiffness of the robot's arms. The robot followed the participants with low lag (M=224, SD=194 ms) across all trials. High admittance gain and high arm stiffness conditions resulted in significantly improved performance with respect to subjective and objective measures. Biomechanical measures such as the human hand to human sternum distance, center-of-mass of leader to center-of-mass of follower (CoM-CoM) distance, and interaction forces correlated with the expert dancers' subjective ratings of their interactions with the robot, which were internally consistent (Cronbach's α=0.92). In response to a final questionnaire, 1/10 expert dancers strongly agreed, 5/10 agreed, and 1/10 disagreed with the statement "The robot was a good follower." 2/10 strongly agreed, 3/10 agreed, and 2/10 disagreed with the statement "The robot was fun to dance with." The remaining participants were neutral with respect to these two questions.
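The admittance control this abstract varies (forces applied at the robot's end effectors mapped to base motion) can be sketched in one dimension. The gain and saturation values below are illustrative placeholders, not the conditions used in the study.

```python
def admittance_step(force, gain, v_max=0.5):
    """Map the force [N] applied at the robot's end effectors to a base
    velocity command [m/s]: a higher admittance gain makes the robot
    yield more readily to its partner. Output is saturated for safety."""
    v = gain * force
    return max(-v_max, min(v_max, v))

# A light forward pull of 5 N with a gain of 0.05 (m/s)/N
v = admittance_step(5.0, 0.05)
```

The study's finding that high gain improved following performance corresponds to this map producing larger velocity responses to the same partner force.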
A simple 5-DOF walking robot for space station application
NASA Technical Reports Server (NTRS)
Brown, H. Benjamin, Jr.; Friedman, Mark B.; Kanade, Takeo
1991-01-01
Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale (1.67-meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with 2-degree-of-freedom wrist joints and grippers at each end. The grippers screw into threaded holes in the nodes of the space station truss, enabling the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, and coordination of robot/astronaut and multiple-robot teams.
Interaction model between capsule robot and intestine based on nonlinear viscoelasticity.
Zhang, Cheng; Liu, Hao; Tan, Renjia; Li, Hongyi
2014-03-01
The active capsule endoscope, also called a capsule robot, has developed from laboratory research to clinical application. However, the system still has defects, such as poor controllability and the inability to perform automatic examinations. The imperfection of the interaction model between the capsule robot and the intestine is one of the dominant reasons for these problems. In this article, we aim to establish a model to support the control of the capsule robot. It is based on nonlinear viscoelasticity. The interaction force of the model consists of environmental resistance, viscous resistance and Coulomb friction. The parameters of the model are identified by experimental investigation. Different methods are used in the experiment to obtain different values of the same parameter at different velocities. The model is shown to be valid by experimental verification. The achievement of this article is an attempted refinement of the interaction model. It is hoped that the model can optimize the control method of the capsule robot in the future.
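The three-term force composition named in this abstract (environmental resistance, viscous resistance, Coulomb friction) can be sketched directly. The coefficient values below are placeholders for illustration, not the experimentally identified parameters from the paper.

```python
import math

def intestinal_resistance(v, f_env=0.02, c=0.15, f_coulomb=0.05):
    """Total interaction force [N] on the capsule robot moving at
    velocity v [m/s], as the sum of environmental resistance, a
    viscous term proportional to velocity, and Coulomb friction
    opposing the direction of motion (illustrative coefficients)."""
    return f_env + c * v + f_coulomb * math.copysign(1.0, v)

f = intestinal_resistance(0.01)  # resistance at 1 cm/s
```

In the paper the viscous term is nonlinear (nonlinear viscoelasticity) and the coefficients vary with velocity, which is why the identification experiments were run at several speeds.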
Communication and knowledge sharing in human-robot interaction and learning from demonstration.
Koenig, Nathan; Takayama, Leila; Matarić, Maja
2010-01-01
Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user's visual observation of the robot and the robot's auditory cues affect the user's ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements. Copyright © 2010. Published by Elsevier Ltd.
How to get the best from robotic thoracic surgery.
Ricciardi, Sara; Zirafa, Carmelina Cristina; Davini, Federico; Melfi, Franca
2018-04-01
The application of robotic technology in thoracic surgery has become widespread in recent decades. Thanks to its advanced features, the robotic system allows surgeons to perform a broad range of complex operations safely and comfortably, with valuable advantages related to low invasiveness. Regarding lung tumours, several studies have shown the benefits of robotic surgery, including lower blood loss and improved lymph node removal, when compared with other minimally invasive techniques. Moreover, the robotic instruments make it possible to reach deep and narrow spaces, permitting safe and precise removal of tumours located in remote areas, such as the retrosternal and posterior mediastinal spaces, with outstanding postoperative and oncological results. One controversial aspect of the robotic system is its high capital and running costs. For this reason, a limited number of centres worldwide are able to employ this groundbreaking technology, and trainees have limited opportunities to acquire the necessary skills in robotic surgery. Therefore, a training programme based on three steps of learning, combined with a solid surgical background and consistent operating activity, is required to obtain effective results. By putting this highest level of technological innovation in the hands of expert surgeons, we can ensure safe and effective procedures and get the best from robotic thoracic surgery.
Can Robotic Interaction Improve Joint Attention Skills?
Warren, Zachary E; Zheng, Zhi; Swanson, Amy R; Bekele, Esubalew; Zhang, Lian; Crittendon, Julie A; Weitlauf, Amy F; Sarkar, Nilanjan
2015-11-01
Although it has often been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorder (ASD), relatively few investigations have indexed the impact of intervention and feedback approaches. This pilot study investigated the application of a novel robotic interaction system capable of administering and adjusting joint attention prompts to a small group (n = 6) of children with ASD. Across a series of four sessions, children improved in their ability to orient to prompts administered by the robotic system and continued to display strong attention toward the humanoid robot over time. The results highlight both potential benefits of robotic systems for directed intervention approaches as well as potent limitations of existing humanoid robotic platforms.
A Force-Sensing System on Legs for Biomimetic Hexapod Robots Interacting with Unstructured Terrain
Wu, Rui; Li, Changle; Zang, Xizhe; Zhang, Xuehe; Jin, Hongzhe; Zhao, Jie
2017-01-01
The tiger beetle can maintain its stability by controlling the interaction force between its legs and an unstructured terrain while it runs. The biomimetic hexapod robot mimics a tiger beetle, and a comprehensive force-sensing system combined with suitable algorithms can provide force information that helps the robot understand the unstructured terrain it interacts with. This study introduces a leg force-sensing system for a hexapod robot that is identical for all six legs. First, the layout and configuration of the sensing system are designed according to the structure and dimensions of the legs. Second, the joint torque sensors, the 3-DOF foot-end force sensor and the force-information processing module are designed, and the force-sensor performance parameters are tested in simulations and experiments. Moreover, the force-sensing system is implemented within the robot control architecture. Finally, the experimental evaluation of the leg force-sensing system on the hexapod robot is discussed and its performance is verified. PMID:28654003
An Exploratory Investigation into the Effects of Adaptation in Child-Robot Interaction
NASA Astrophysics Data System (ADS)
Salter, Tamie; Michaud, François; Létourneau, Dominic
The work presented in this paper describes an exploratory investigation into the potential effects of a robot exhibiting adaptive behaviour in reaction to a child’s interaction. In our laboratory we develop robotic devices for a diverse range of children who differ in age, gender and ability, including children diagnosed with cognitive difficulties. As all children vary in their personalities and styles of interaction, it follows that adaptation could bring many benefits. In this abstract we give our initial examination of a series of trials exploring the effects of a fully autonomous rolling robot exhibiting adaptation (through changes in motion and sound), compared with the same robot exhibiting pre-programmed behaviours. We investigate on-board sensor readings that record the level of ‘interaction’ the robot receives when a child plays with it, and we also discuss the results of analysing video footage for the social aspects of the trials.
A Case Study of Collaboration with Multi-Robots and Its Effect on Children's Interaction
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Wu, Sheng-Yi
2014-01-01
Learning how to carry out collaborative tasks is critical to the development of a student's capacity for social interaction. In this study, a multi-robot system was designed for students. In three different scenarios, students controlled robots in order to move dice; we then examined their collaborative strategies and their behavioral…
ERIC Educational Resources Information Center
Dunst, Carl J.; Trivette, Carol M.; Hamby, Deborah W.; Prior, Jeremy; Derryberry, Graham
2013-01-01
Findings from two studies investigating the effects of a socially interactive robot on the vocalization production of young children with disabilities are reported. The two studies included seven children with autism, two children with Down syndrome, and two children with attention deficit disorders. The Language ENvironment Analysis (LENA)…
Model-based safety analysis of human-robot interactions: the MIRAS walking assistance robot.
Guiochet, Jérémie; Hoang, Quynh Anh Do; Kaaniche, Mohamed; Powell, David
2013-06-01
Robotic systems have to cope with various execution environments while guaranteeing safety, in particular when they interact with humans during rehabilitation tasks. These systems are often critical, since their failure can lead to human injury or even death. However, such systems are difficult to validate due to their high complexity and the fact that they operate within complex, variable and uncertain environments (including users), in which it is difficult to foresee all possible system behaviors. Because of the complexity of human-robot interactions, rigorous and systematic approaches are needed to assist developers in identifying significant threats, implementing efficient protection mechanisms, and elaborating a sound argumentation to justify the level of safety that can be achieved by the system. For threat identification, we propose a method called HAZOP-UML, based on a risk analysis technique adapted to system description models, focusing on human-robot interaction models. The output of this step is then injected into a structured safety argumentation using the GSN graphical notation. These approaches have been successfully applied to the development of a walking assistance robot that is now in clinical validation.
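The core mechanical step of a HAZOP-style analysis is to systematically cross guide words with model elements to enumerate candidate deviations for analysts to assess. A toy sketch of that enumeration, where the guide words are a standard HAZOP subset and the use-case attributes are invented for illustration (not taken from the MIRAS models):

```python
from itertools import product

# Standard HAZOP guide words (subset) crossed with attributes of a
# hypothetical "support walking" use case; each pair is a candidate
# deviation for a human analyst to assess for hazard potential.
GUIDE_WORDS = ["no", "more", "less", "as well as", "other than", "early", "late"]
ATTRIBUTES = [
    "user grasps handles",
    "robot applies support force",
    "robot stops at obstacle",
]

def enumerate_deviations(guide_words, attributes):
    """Return every (guide word, attribute) combination as a deviation label."""
    return [f"{gw.upper()}: {attr}" for gw, attr in product(guide_words, attributes)]

deviations = enumerate_deviations(GUIDE_WORDS, ATTRIBUTES)
```

In HAZOP-UML the systematic enumeration guarantees coverage of the interaction model; the human expertise then goes into judging each deviation's plausibility and severity, feeding the GSN safety argument.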
Robots for better health and quality of life. | NIH MedlinePlus the Magazine
Evolutionary Developmental Robotics: Improving Morphology and Control of Physical Robots.
Vujovic, Vuk; Rosendo, Andre; Brodbeck, Luzius; Iida, Fumiya
2017-01-01
Evolutionary algorithms have previously been applied to the design of morphology and control of robots. The design space for such tasks can be very complex, which can prevent evolution from efficiently discovering fit solutions. In this article we introduce an evolutionary-developmental (evo-devo) experiment with real-world robots. It allows robots to grow their leg size to simulate ontogenetic morphological changes, and this is the first time that such an experiment has been performed in the physical world. To test diverse robot morphologies, robot legs of variable shapes were generated during the evolutionary process and autonomously built using additive fabrication. We present two cases with evo-devo experiments and one with evolution, and we hypothesize that the addition of a developmental stage can be used within robotics to improve performance. Moreover, our results show that a nonlinear system-environment interaction exists, which explains the nontrivial locomotion patterns observed. In the future, robots will be present in our daily lives, and this work introduces for the first time physical robots that evolve and grow while interacting with the environment.
Creating the brain and interacting with the brain: an integrated approach to understanding the brain
Morimoto, Jun; Kawato, Mitsuo
2015-01-01
In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
In good company? Perception of movement synchrony of a non-anthropomorphic robot.
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.
Using a robot to personalise health education for children with diabetes type 1: a pilot study.
Blanson Henkemans, Olivier A; Bierman, Bert P B; Janssen, Joris; Neerincx, Mark A; Looije, Rosemarijn; van der Bosch, Hanneke; van der Giessen, Jeanine A M
2013-08-01
Assess the effects of personalised robot behaviours on the enjoyment and motivation of children (8-12) with diabetes, and on their acquisition of health knowledge, in educational play. Children (N=5) played diabetes quizzes against a personal or neutral robot on three occasions: once at the clinic, twice at home. The personal robot asked them about their names, sports and favourite colours, referred to these data during the interaction, and engaged in small talk. Fun, motivation and diabetes knowledge were measured, and child-robot interaction was observed. Children said the robot and quiz were fun, but this appreciation declined over time. With the personal robot, the children looked more at the robot and spoke more. The children mimicked the robot. Finally, an increase in knowledge about diabetes was observed. The study provides a strong indication of how a personal robot can help children improve health literacy in an enjoyable way. Children mimic the robot, and when the robot is personal, they follow suit. Our results are positive and establish a good foundation for further development and testing in a larger study. Using a robot in health care could contribute to self-management in children and help them cope with their illness. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Ueyama, Yuki
2015-01-01
One of the core features of autism spectrum disorder (ASD) is impaired reciprocal social interaction, especially in processing emotional information. Social robots are used to encourage children with ASD to take the initiative and to interact with the robotic tools to stimulate emotional responses. However, the existing evidence is limited by poor trial designs. The purpose of this study was to provide computational evidence in support of robot-assisted therapy for children with ASD. We thus propose an emotional model of ASD that adapts a Bayesian model of the uncanny valley effect, which holds that a human-looking robot can provoke repulsion and sensations of eeriness. Based on the unique emotional responses of children with ASD to the robots, we postulate that ASD induces a unique emotional response curve, more like a cliff than a valley. Thus, we performed numerical simulations of robot-assisted therapy to evaluate its effects. The results showed that, although a stimulus fell into the uncanny valley in the typical condition, it was effective at avoiding the uncanny cliff in the ASD condition. Consequently, individuals with ASD may find it more comfortable, and may modify their emotional response, if the robots look like deformed humans, even if they appear “creepy” to typical individuals. Therefore, we suggest that our model explains the effects of robot-assisted therapy in children with ASD and that human-looking robots may have potential advantages for improving social interactions in ASD. PMID:26389805
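The Bayesian account adapted in this abstract can be illustrated with a categorical-perception sketch: two Gaussian category likelihoods ("robot" vs "human") along a human-likeness axis, with an eeriness proxy tied to the steepness of the category boundary, where perceptual conflict peaks. The curve below is a schematic illustration under invented parameters, not the paper's fitted model:

```python
import numpy as np

def uncanny_curve(x, mu_robot=0.3, mu_human=0.8, sigma=0.1, prior_human=0.5):
    """Posterior P(human|x) along a human-likeness axis x, plus an 'eeriness'
    proxy proportional to the magnitude of the posterior's gradient
    (category conflict is sharpest at the boundary). Parameters are
    illustrative placeholders."""
    like_r = np.exp(-0.5 * ((x - mu_robot) / sigma) ** 2)
    like_h = np.exp(-0.5 * ((x - mu_human) / sigma) ** 2)
    post_h = prior_human * like_h / (prior_human * like_h + (1 - prior_human) * like_r)
    eeriness = np.abs(np.gradient(post_h, x))
    return post_h, eeriness

x = np.linspace(0.0, 1.0, 201)
post, eer = uncanny_curve(x)
```

Under this sketch, eeriness peaks midway between the two category means; the "cliff" hypothesis for ASD could be explored by making the response asymmetric around that boundary, though how to parameterise that is the paper's contribution, not shown here.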
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. To this end, the operator's residual functions must be exploited for the control of the robot's movements and the underlying dynamic interaction, through intuitive and effective human-robot interfaces. This work therefore aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks by a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of the robotic arm's trajectories, and a Brain-Computer Interface (BCI) unit, for control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and the environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active mental state. This estimated state is then translated into a stiffness value for the robotic arm, allowing reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
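The stiffness-control idea described here, mapping a one-dimensional EEG-derived probability of an "active" mental state to the Cartesian stiffness of the arm, can be sketched as a bounded linear mapping with smoothing against decoder noise. The stiffness range and smoothing constant are invented for illustration, not taken from the paper:

```python
def eeg_to_stiffness(p_active, k_min=200.0, k_max=1200.0):
    """Map P(active mental state) in [0,1] to a Cartesian stiffness [N/m].

    A neutral state yields a compliant robot (k_min); a confidently active
    state yields a stiff robot (k_max). Range values are hypothetical.
    """
    p = min(max(p_active, 0.0), 1.0)  # clamp decoder output to [0, 1]
    return k_min + p * (k_max - k_min)

class StiffnessFilter:
    """First-order exponential smoothing so commanded stiffness does not
    jump with every noisy EEG classification."""
    def __init__(self, alpha=0.2, k0=200.0):
        self.alpha, self.k = alpha, k0
    def update(self, p_active):
        self.k += self.alpha * (eeg_to_stiffness(p_active) - self.k)
        return self.k
```

The design choice mirrors impedance control practice: the gaze interface commands where the end-effector goes, while the BCI channel only modulates how forcefully the arm resists perturbations along the way.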
Maneuvering and control of flexible space robots
NASA Technical Reports Server (NTRS)
Meirovitch, Leonard; Lim, Seungchul
1994-01-01
This paper is concerned with a flexible space robot capable of maneuvering payloads. The robot is assumed to consist of two hinge-connected flexible arms and a rigid end-effector holding a payload; the robot is mounted on a rigid platform floating in space. The equations of motion are nonlinear and of high order. Based on the assumption that the maneuvering motions are one order of magnitude larger than the elastic vibrations, a perturbation approach permits design of controls for the two types of motion separately. The rigid-body maneuvering is carried out open loop, but the elastic motions are controlled closed loop, by means of discrete-time linear quadratic regulator theory with prescribed degree of stability. A numerical example demonstrates the approach. In the example, the controls derived by the perturbation approach are applied to the original nonlinear system and errors are found to be relatively small.
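The closed-loop design mentioned above, discrete-time LQR with a prescribed degree of stability, can be sketched for a generic linear model: solving the Riccati recursion for the scaled pair (A/rho, B/rho) and applying the resulting gain to the original system bounds the closed-loop spectral radius by rho < 1 (the Anderson-Moore construction). The two-state model below is a hypothetical stand-in, not the paper's flexible-robot dynamics:

```python
import numpy as np

def dlqr_prescribed(A, B, Q, R, rho=0.9, iters=2000):
    """Discrete LQR gain K with closed-loop eigenvalues inside |z| < rho.

    Since (A - B K) / rho = (A/rho) - (B/rho) K, stabilizing the scaled
    pair in the ordinary LQR sense forces the original closed loop to
    have spectral radius below rho.
    """
    Ab, Bb = A / rho, B / rho
    P = Q.copy()
    for _ in range(iters):  # fixed-point iteration of the Riccati recursion
        K = np.linalg.solve(R + Bb.T @ P @ Bb, Bb.T @ P @ Ab)
        P = Q + Ab.T @ P @ (Ab - Bb @ K)
    return K

# Hypothetical two-state example (a single discretized mode), dt = 0.1 s
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = dlqr_prescribed(A, B, Q=np.eye(2), R=np.array([[1.0]]), rho=0.9)
radius = max(abs(np.linalg.eigvals(A - B @ K)))
```

In the paper's two-time-scale scheme, a gain of this kind would act only on the slow elastic-vibration model produced by the perturbation analysis, while the rigid-body maneuver runs open loop.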
Advanced mechanisms for robotics
NASA Technical Reports Server (NTRS)
Vranish, John M.
1992-01-01
An overview of applied research and development at NASA-Goddard (GSFC) on mechanisms and the collision-avoidance skin for robots is presented. First the work on robot end effectors is outlined, followed by a brief discussion of robot-friendly payload latching mechanisms and compliant joints. This, in turn, is followed by the collision avoidance/management skin and the GSFC research on magnetostrictive direct-drive motors. Finally, a new project, the artificial muscle, is introduced. Each of the devices is described in sufficient detail to permit a basic understanding of its purpose, fundamental principles of operation, and capabilities. In addition, the development status of each is reported, along with descriptions of breadboards and prototypes and their test results. In each case, the implications of the research for commercialization are discussed. The chronology of the presentation will give a clear idea of both the evolution of the R&D in recent years and its likely direction in the future.
Ghost-in-the-Machine reveals human social signals for human-robot interaction.
Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P
2015-01-01
We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.
Human motion behavior while interacting with an industrial robot.
Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus
2012-01-01
Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which has led to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions, and avoiding them is a central safety requirement. It can be realized with various sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times; optimized path-planning algorithms have to be developed to avoid this. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted, and a motion-tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when the encounter occurs. The approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities is observed.
NASA Astrophysics Data System (ADS)
Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon
2016-07-01
With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.
Warren, Zachary; Muramatsu, Taro; Yoshikawa, Yuichiro; Matsumoto, Yoshio; Miyao, Masutomo; Nakano, Mitsuko; Mizushima, Sakae; Wakita, Yujin; Ishiguro, Hiroshi; Mimura, Masaru; Minabe, Yoshio; Kikuchi, Mitsuru
2017-01-01
Recent rapid technological advances have enabled robots to fulfill a variety of human-like functions, leading researchers to propose the use of such technology for the development and subsequent validation of interventions for individuals with autism spectrum disorder (ASD). Although a variety of robots have been proposed as possible therapeutic tools, the physical appearances of humanoid robots currently used in therapy with these patients are highly varied. Very little is known about how these varied designs are experienced by individuals with ASD. In this study, we systematically evaluated preferences regarding robot appearance in a group of 16 individuals with ASD (ages 10–17). Our data suggest that there may be important differences in preference for different types of robots that vary according to interaction type for individuals with ASD. Specifically, within our pilot sample, children with higher levels of reported ASD symptomatology reported a preference for specific humanoid robots over those perceived as more mechanical or mascot-like. The findings of this pilot study suggest that preferences and reactions to robotic interactions may vary tremendously across individuals with ASD. Future work should evaluate how such differences may be systematically measured and potentially harnessed to facilitate meaningful interactive and intervention paradigms. PMID:29028837
AIonAI: a humanitarian law of artificial intelligence and robotics.
Ashrafian, Hutan
2015-02-01
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, research in this field has focused overwhelmingly on human-robot interactions, without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot-robot interactions. A new robotic law is proposed and termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, but would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missed critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects on operator task performance of including or excluding the robot chassis in the camera view, combined with superimposing a simple arrow overlay onto the video feed, during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
3D force control for robotic-assisted beating heart surgery based on viscoelastic tissue model.
Liu, Chao; Moreira, Pedro; Zemiti, Nabil; Poignet, Philippe
2011-01-01
Current cardiac surgery faces the challenging problem of heart-beating motion, which makes delicate operation on the heart surface difficult even with the help of a mechanical stabilizer. Motion compensation methods for robotic-assisted beating heart surgery have been proposed recently in the literature, but research on force control for this kind of surgery has hardly been reported. Moreover, the viscoelastic property of the interaction between organ tissue and the robotic instrument further complicates the force control design, which is much easier in other applications where the interaction model can be assumed to be elastic (industry, stiff object manipulation, etc.). In this work, we present a three-dimensional force control method for robotic-assisted beating heart surgery that takes the viscoelastic interaction property into consideration. Performance studies based on our D2M2 robot and 3D heart beating motion information obtained through the Da Vinci™ system are provided.
Towards a multilevel cognitive probabilistic representation of space
NASA Astrophysics Data System (ADS)
Tapus, Adriana; Vasudevan, Shrihari; Siegwart, Roland
2005-03-01
This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological perspective, in terms of objects and relationships between them. The hierarchical representation that we propose permits an efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of images taken from the real world that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.
Robotics in biomedical chromatography and electrophoresis.
Fouda, H G
1989-08-11
The ideal laboratory robot can be viewed as "an indefatigable assistant capable of working continuously for 24 h a day with constant efficiency". The development of a system approaching that promise requires considerable skill and time commitment, a thorough understanding of the capabilities and limitations of the robot and its specialized modules and an intimate knowledge of the functions to be automated. The robot need not emulate every manual step. Effective substitutes for difficult steps must be devised. The future of laboratory robots depends not only on technological advances in other fields, but also on the skill and creativity of chromatographers and other scientists. The robot has been applied to automate numerous biomedical chromatography and electrophoresis methods. The quality of its data can approach, and in some cases exceed, that of manual methods. Maintaining high data quality during continuous operation requires frequent maintenance and validation. Well designed robotic systems can yield substantial increase in the laboratory productivity without a corresponding increase in manpower. They can free skilled personnel from mundane tasks and can enhance the safety of the laboratory environment. The integration of robotics, chromatography systems and laboratory information management systems permits full automation and affords opportunities for unattended method development and for future incorporation of artificial intelligence techniques and the evolution of expert systems. Finally, humanoid attributes aside, robotic utilization in the laboratory should not be an end in itself. The robot is a useful tool that should be utilized only when it is prudent and cost-effective to do so.
How do walkers avoid a mobile robot crossing their way?
Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien
2017-01-01
Robots and humans have to share the same environment more and more often. To steer robots safely and conveniently among humans, it is necessary to understand how humans interact with them. This work focuses on collision avoidance between a human and a robot during locomotion. Having in mind previous results on human obstacle avoidance, as well as the description of the main principles that guide collision avoidance strategies, we observe how humans adapt a goal-directed locomotion task when they have to interfere with a mobile robot. Our results show differences in the strategy set by humans to avoid a robot in comparison with avoiding another human. Humans prefer to give way to the robot even when they are likely to pass first at the beginning of the interaction. Copyright © 2016 Elsevier B.V. All rights reserved.
Jones, Raya A
2017-08-01
Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.
Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD.
Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan
2015-12-01
Increasingly, researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and feedback. The system was designed to incorporate both a humanoid robot and a human examiner. We compared child performance within the system across these conditions in a sample of preschool children with ASD (n = 8) and a control sample of typically developing children (n = 8). The system was well tolerated in the sample; children with ASD exhibited greater attention to the robotic system than to the human administrator, and their imitation performance appeared superior during the robotic interaction.
Roberts, Luke; Park, Hae Won; Howard, Ayanna M
2012-01-01
Rehabilitation robots in home environments have the potential to dramatically improve quality of life for individuals who experience disabling circumstances due to injury or chronic health conditions. Unfortunately, although classes of robotic systems for rehabilitation exist, these devices are typically not designed for children. Since over 150 million children in the world live with a disability, this poses a unique challenge for deploying such robotics for this target demographic. To overcome this barrier, we discuss a system that uses a wireless arm-glove input device to enable interaction with a robotic playmate during various play scenarios. Results from testing the system with 20 human subjects show that the system has potential, but certain aspects need to be improved before deployment with children.
Kaboski, Juhi R; Diehl, Joshua John; Beriont, Jane; Crowell, Charles R; Villano, Michael; Wier, Kristin; Tang, Karen
2015-12-01
This pilot study evaluated a novel intervention designed to reduce social anxiety and improve social/vocational skills for adolescents with autism spectrum disorder (ASD). The intervention utilized a shared interest in robotics among participants to facilitate natural social interaction between individuals with ASD and typically developing (TD) peers. Eight individuals with ASD and eight TD peers ages 12-17 participated in a weeklong robotics camp, during which they learned robotic facts, actively programmed an interactive robot, and learned "career" skills. The ASD group showed a significant decrease in social anxiety and both groups showed an increase in robotics knowledge, although neither group showed a significant increase in social skills. These initial findings suggest that this approach is promising and warrants further study.
ERIC Educational Resources Information Center
Dunst, Carl J.; Hamby, Deborah W.; Trivette, Carol M.; Prior, Jeremy; Derryberry, Graham
2013-01-01
The effects of a socially interactive robot on the vocalization production of five children with disabilities (4 with autism, 1 with a sensory processing disorder) were the focus of the intervention study described in this research report. The interventions with each child were conducted over 4 or 5 days in the children's homes and involved…
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.
2018-01-01
The focus of the research is a teaching module that incorporates manufacturing, mechanical design planning, control systems based on microprocessor technology, and robot maneuverability. Computer-interactive and computer-assisted learning are strategies that emphasize the use of computers and learning aids (computer-assisted learning) in teaching and learning activities. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974), which consists of four stages: Define, Design, Develop, and Disseminate. The research applied this development design with the objective of producing a learning tool in the form of intelligent robot modules and kits based on Computer Interactive Learning and Computer Assisted Learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the modules that have been developed reached the fourth stage of the development method, dissemination. The modules guide students to produce an Intelligent Robot Tool for Teaching Based on Computer Interactive Learning and Computer Assisted Learning. Students' responses also showed positive feedback on the robotics module and computer-based interactive learning.
Tan, Huan; Liang, Chen
2011-01-01
This paper proposes a conceptual hybrid cognitive architecture for cognitive robots to learn behaviors from demonstrations in robotic aid situations. Unlike current cognitive architectures, this architecture concentrates on the requirements of safety, interaction, and non-centralized processing in robotic aid situations. Imitation learning technologies for cognitive robots have been integrated into this architecture for rapidly transferring knowledge and skills between human teachers and robots.
In Good Company? Perception of Movement Synchrony of a Non-Anthropomorphic Robot
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot’s likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants’ perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot. PMID:26001025
Virtual spring damper method for nonholonomic robotic swarm self-organization and leader following
NASA Astrophysics Data System (ADS)
Wiech, Jakub; Eremeyev, Victor A.; Giorgio, Ivan
2018-04-01
In this paper, we demonstrate a method for self-organization and leader following of a nonholonomic robotic swarm based on a spring-damper mesh. By self-organization of swarm robots we mean the emergence of order in a swarm as the result of interactions among the individual robots; in other words, the self-organization of swarm robots mimics natural behaviors of social animals such as ants. The dynamics of a two-wheeled robot are derived, and a relation between virtual forces and robot control inputs is defined in order to establish a stable swarm formation. Two cases of swarm control are analyzed. In the first case, swarm cohesion is achieved by a virtual spring-damper mesh connecting nearest neighboring robots, without a designated leader. In the second case, we introduce a swarm leader interacting with nearest and second-nearest neighbors, allowing the swarm to follow the leader. The paper ends with numerical simulations for performance evaluation of the proposed control method.
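The abstract does not specify the force law of the virtual links. A minimal sketch of one spring-damper coupling between neighboring swarm robots, assuming a linear spring (stiffness `k`, rest length `l0`) in parallel with a viscous damper (coefficient `c`) acting along the line between the robots, with all parameter values hypothetical, could look like:

```python
import numpy as np

def spring_damper_force(p_i, p_j, v_i, v_j, k=5.0, c=1.0, l0=1.0):
    """Virtual force on robot i from a spring-damper link to robot j.

    p_*: 2-D positions, v_*: 2-D velocities (NumPy arrays).
    k, c, l0 are illustrative stiffness, damping, and rest-length values.
    """
    d = p_j - p_i
    dist = np.linalg.norm(d)
    if dist < 1e-9:                        # coincident robots: no defined direction
        return np.zeros(2)
    u = d / dist                           # unit vector from i toward j
    spring = k * (dist - l0) * u           # pulls i toward j when stretched
    damper = c * np.dot(v_j - v_i, u) * u  # damps relative radial motion
    return spring + damper

# Two robots 2 m apart and at rest: the stretched 1 m spring pulls them
# together with magnitude k * (2 - 1) = 5 along +x.
f = spring_damper_force(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                        np.zeros(2), np.zeros(2))
```

Summing such forces over a robot's mesh neighbors, then mapping the resultant onto the two wheel inputs, is the general pattern the abstract describes.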
Role of expressive behaviour for robots that learn from people
Breazeal, Cynthia
2009-01-01
Robotics has traditionally focused on developing intelligent machines that can manipulate and interact with objects. The promise of personal robots, however, challenges researchers to develop socially intelligent robots that can collaborate with people to do things. In the future, robots are envisioned to assist people with a wide range of activities such as domestic chores, helping elders to live independently longer, serving a therapeutic role to help children with autism, assisting people undergoing physical rehabilitation and much more. Many of these activities shall require robots to learn new tasks, skills and individual preferences while ‘on the job’ from people with little expertise in the underlying technology. This paper identifies four key challenges in developing social robots that can learn from natural interpersonal interaction. The author highlights the important role that expressive behaviour plays in this process, drawing on examples from the past 8 years of her research group, the Personal Robots Group at the MIT Media Lab. PMID:19884147
Emergent of Burden Sharing of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Kusano, Takuya; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and offers flexibility across a variety of tasks. In a multi-robot system, the robots need to build cooperative relations and act as an organization to attain a purpose. The group behavior of insects, which lack advanced individual abilities, is instructive here. Ants, a social insect, produce organized activities through interactions using very simple means: while ants communicate with chemical substances, humans communicate by words and gestures. In this paper, we focused on interaction from a psychological viewpoint, and a human emotion model was used as the parameter underlying the motion planning of the robots. The robots were made to perform two-way actions in a test field with obstacles. As a result, burden sharing, with roles such as guide and carrier, emerged even though the robots had a simple setup.
Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D
2017-12-01
The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
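The abstract does not detail the gaze vector method. One common way to triangulate a 3-D gaze point from the two eyes' gaze rays is to take the midpoint of the shortest segment between them; the sketch below uses that assumption, and the function name and fallback behavior are illustrative rather than taken from the paper:

```python
import numpy as np

def gaze_point_3d(o1, d1, o2, d2):
    """Estimate a 3-D fixation point as the midpoint of the shortest
    segment between two gaze rays (origins o1, o2; directions d1, d2).

    Midpoint triangulation is a common choice, assumed here for
    illustration; it is not necessarily the cited paper's method.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:        # near-parallel rays: fix s = 0 on ray 1
        s = 0.0
        t = e / c
    else:                        # standard closest-point solution
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p1 = o1 + s * d1             # closest point on ray 1
    p2 = o2 + t * d2             # closest point on ray 2
    return (p1 + p2) / 2

# Eyes 6 cm apart, both fixating a target 1 m straight ahead:
# both rays pass through (0, 0, 1), so the estimate recovers it.
p = gaze_point_3d(np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 1.0]),
                  np.array([0.03, 0.0, 0.0]), np.array([-0.03, 0.0, 1.0]))
```

With noisy eye-tracker data the two rays rarely intersect exactly, which is why the midpoint of the shortest connecting segment, rather than a true intersection, is used.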
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. 
In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the simulation fidelity necessary for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoon-like nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.
Representing and Learning Complex Object Interactions
Zhou, Yilun; Konidaris, George
2017-01-01
We present a framework for representing scenarios with complex object interactions, in which a robot cannot directly interact with the object it wishes to control, but must instead do so via intermediate objects. For example, a robot learning to drive a car can only indirectly change its pose, by rotating the steering wheel. We formalize such complex interactions as chains of Markov decision processes and show how they can be learned and used for control. We describe two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game, and using a hot water dispenser to heat a cup of water. PMID:28593181
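As an illustration of the chained-MDP idea (a toy, deterministic reduction, not the authors' formalization): the robot's action changes only the intermediate object, here a steering wheel, and the target object, the car's heading, is changed only by the intermediate object's state.

```python
# Toy chain: robot action -> steering-wheel MDP -> car-heading MDP.
# All dynamics are invented for illustration.

def wheel_step(wheel_angle, action):
    """The robot directly controls only the wheel (discrete angle in [-2, 2])."""
    return max(-2, min(2, wheel_angle + action))

def car_step(heading, wheel_angle):
    """The car is influenced only via the wheel: heading drifts by its angle."""
    return heading + wheel_angle

def chain_step(state, action):
    """One step of the composed chain: action reaches the car only indirectly."""
    wheel, heading = state
    wheel = wheel_step(wheel, action)
    heading = car_step(heading, wheel)
    return (wheel, heading)

# Drive the heading from 0 to a target of +4 by turning the wheel,
# then straightening it back out.
state = (0, 0)
for a in [1, 1, -1, -1, 0]:
    state = chain_step(state, a)
# state is now (0, 4): wheel centered, heading at the target.
```

The point of the chain structure is that a policy must be learned for each link: what the action does to the intermediate object, and what the intermediate object's state does to the target.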
Scalable fabric tactile sensor arrays for soft bodies
NASA Astrophysics Data System (ADS)
Day, Nathan; Penaloza, Jimmy; Santos, Veronica J.; Killpack, Marc D.
2018-06-01
Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots which can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. This paper discusses the development of a method to make affordable, high-resolution, tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate a change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. Fabric sensor arrays are demonstrated for two different soft-bodied applications: an inflatable single link robot and a human wrist.
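To see why cross-talk appears when taxels share rows and columns, consider the smallest case. In a passively scanned 2x2 resistive array with no isolation elements, a meter across taxel (0,0) also sees a parasitic series path through the other three taxels. The sketch below is a simplified assumption for illustration, not necessarily the paper's compensation model:

```python
def measured_resistance_2x2(r):
    """Resistance seen across taxel (0, 0) of a 2x2 row-column array
    scanned passively with no isolation elements.

    r: dict {(row, col): taxel resistance in ohms}.
    The parasitic path runs through (0, 1), (1, 1), and (1, 0) in
    series, and that path sits in parallel with the target taxel.
    """
    r_target = r[(0, 0)]
    r_parasitic = r[(0, 1)] + r[(1, 1)] + r[(1, 0)]
    return 1.0 / (1.0 / r_target + 1.0 / r_parasitic)

# All taxels at 1 kOhm: the meter reads 1k || 3k = 750 ohms,
# a 25% error caused purely by the array wiring.
reading = measured_resistance_2x2({(0, 0): 1000.0, (0, 1): 1000.0,
                                   (1, 0): 1000.0, (1, 1): 1000.0})
```

Both compensation strategies the paper mentions address this same effect: one by modeling the parasitic network mathematically, the other by adding circuit components that electrically isolate each taxel.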
Investigating the ability to read others' intentions using humanoid robots.
Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio
2015-01-01
The ability to interact with other people hinges crucially on the possibility of anticipating how their actions will unfold. Recent evidence suggests that this skill may be grounded in the fact that we perform an action differently depending on the intention that drives it. Human observers can detect these differences and use them to predict the intention behind the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing an ideal trade-off between the controllability and naturalness of the interactive scenario. Robots can indeed establish an interaction in a controlled manner while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.
Development of a skin for intuitive interaction with an assistive robot.
Markham, Heather C; Brewer, Bambi R
2009-01-01
Assistive robots for persons with physical limitations need to interact with humans in a manner that is safe for the user and the environment. Early work in this field centered on task-specific robots. Recent work has focused on the use of the MANUS ARM and the development of different interfaces. The most intuitive interaction with an object is through touch. By creating a skin for the robot arm that directly controls its movement compliance, we have developed a novel and intuitive method of interaction. This paper describes the development of a skin which acts as a switch. When activated through touch, the skin puts the arm into compliant mode, allowing it to be moved safely to the desired location; when released, it puts the robot into non-compliant mode, thereby keeping it in place. We investigated four conductive materials and four insulators, selecting the best combination based on our design goals: a continuous activation surface, the least force required for skin activation, and the most consistent voltage change between the conductive surfaces measured during activation.
Empowering Older Patients to Engage in Self Care: Designing an Interactive Robotic Device
Tiwari, Priyadarshi; Warren, Jim; Day, Karen
2011-01-01
Objectives: To develop and test an interactive, robot-mounted computing device to support medication management as an example of a complex self-care task in older adults. Method: A Grounded Theory (GT), Participatory Design (PD) approach was used within three Action Research (AR) cycles to understand design requirements and test the design configuration addressing the unique task requirements. Results: At the end of the first cycle a conceptual framework was developed. The second cycle informed architecture and interface design. By the end of the third cycle, residents successfully interacted with the dialogue system and were generally satisfied with the robot. The results informed further refinement of the prototype. Conclusion: An interactive, touch-screen-based, robot-mounted information tool can be developed to support the healthcare needs of older people. Qualitative methods such as the hybrid GT-PD-AR approach may be particularly helpful for innovating and articulating design requirements in challenging situations. PMID:22195203
Space environments and their effects on space automation and robotics
NASA Technical Reports Server (NTRS)
Garrett, Henry B.
1990-01-01
Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.
ERIC Educational Resources Information Center
Dunst, Carl J.; Hamby, Deborah W.; Trivette, Carol M.; Prior, Jeremy; Derryberry, Graham
2013-01-01
The effects of a socially interactive robot on the conversational turns between four young children with autism and their mothers were investigated as part of the intervention study described in this research report. The interventions with each child were conducted over 4 or 5 days in the children's homes where a practitioner facilitated…
A tele-operated mobile ultrasound scanner using a light-weight robot.
Delgorge, Cécile; Courrèges, Fabien; Al Bassit, Lama; Novales, Cyril; Rosenberger, Christophe; Smith-Guerin, Natalie; Brù, Concepció; Gilabert, Rosa; Vannoni, Maurizio; Poisson, Gérard; Vieyres, Pierre
2005-03-01
This paper presents a new tele-operated robotic chain for real-time ultrasound image acquisition and medical diagnosis. The system was developed within the Mobile Tele-Echography Using an Ultralight Robot European project. A light-weight, six-degrees-of-freedom serial robot with a remote center of motion has been specially designed for this application. It holds and moves a real probe on a distant patient according to the expert's gestures and permits image acquisition using a standard ultrasound device. The combination of the robot's mechanical structure and a dedicated control law, particularly near the singular configuration, allows good path following and accurate robotized gestures. The choice of compression techniques for image transmission enables a compromise between bit rate and image quality. Together, these robotics and image-processing approaches enable the medical specialist to better control the remote ultrasound probe-holder system and to receive stable, good-quality ultrasound images for diagnosis via any type of communication link, from terrestrial to satellite. Clinical tests have been performed since April 2003, using either satellite or Integrated Services Digital Network lines with a theoretical bandwidth of 384 kb/s. They showed that the tele-echography system helped to identify 66% of lesions and 83% of symptomatic pathologies.
On the Effectiveness of Robot-Assisted Language Learning
ERIC Educational Resources Information Center
Lee, Sungjin; Noh, Hyungjong; Lee, Jonghoon; Lee, Kyusong; Lee, Gary Geunbae; Sagong, Seongdae; Kim, Munsang
2011-01-01
This study introduces the educational assistant robots that we developed for foreign language learning and explores the effectiveness of robot-assisted language learning (RALL) which is in its early stages. To achieve this purpose, a course was designed in which students have meaningful interactions with intelligent robots in an immersive…
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko
2011-01-01
Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work, which included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with its relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology.
In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effect of overlays that are either superimposed on or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.
A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.
Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash
2017-07-01
This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve an enhanced physical human-robot interaction performance and deliver appropriate level of assistance to the human operator.
Long-term knowledge acquisition using contextual information in a memory-inspired robot architecture
NASA Astrophysics Data System (ADS)
Pratama, Ferdian; Mastrogiovanni, Fulvio; Lee, Soon Geul; Chong, Nak Young
2017-03-01
In this paper, we present a novel cognitive framework allowing a robot to form memories of relevant traits of its perceptions and to recall them when necessary. The framework is based on two main principles: on the one hand, we propose an architecture inspired by current knowledge of human memory organisation; on the other hand, we integrate such an architecture with the notion of context, which is used to modulate the knowledge acquisition process when consolidating memories and forming new ones, as well as with the notion of familiarity, which is employed to retrieve the proper memories given relevant cues. Although much research has exploited Machine Learning approaches to provide robots with internal models of their environment (including objects and occurring events therein), we argue that such approaches may not be the right direction to follow if long-term, continuous knowledge acquisition is to be achieved. As a case study scenario, we focus on both robot-environment and human-robot interaction processes. In the case of robot-environment interaction, a robot performs pick-and-place movements using the objects in the workspace, at the same time observing their displacement on a table in front of it, and progressively forms memories defined as relevant cues (e.g. colour, shape or relative position) in a context-aware fashion. As far as human-robot interaction is concerned, the robot can recall specific snapshots representing past events using both sensory information and contextual cues upon request by humans.
Collaboration by Design: Using Robotics to Foster Social Interaction in Kindergarten
ERIC Educational Resources Information Center
Lee, Kenneth T. H.; Sullivan, Amanda; Bers, Marina U.
2013-01-01
Research shows the importance of social interaction between peers in child development. Although technology can foster peer interactions, teachers often struggle with teaching with technology. This study examined a sample of (n = 19) children participating in a kindergarten robotics summer workshop to determine the effect of teaching using a…
Chemuturi, Radhika; Amirabdollahian, Farshid; Dautenhahn, Kerstin
2013-09-28
Rehabilitation robotics is progressing towards developing robots that can be used as advanced tools to augment the role of a therapist. These robots are capable not only of offering more frequent and more accessible therapies but also of providing new insights into treatment effectiveness based on their ability to measure interaction parameters. A requirement for more advanced therapies is to identify how robots can 'adapt' to each individual's needs at different stages of recovery. Hence, our research focused on developing an adaptive interface for the GENTLE/A rehabilitation system. The interface was based on a lead-lag performance model utilising the interaction between the human and the robot. The goal of the present study was to test the adaptability of the GENTLE/A system to the performance of the user. Point-to-point movements were executed using the HapticMaster (HM) robotic arm, the main component of the GENTLE/A rehabilitation system. The points were displayed as balls on the screen, and some of the points also had a real object, providing a test-bed for the human-robot interaction (HRI) experiment. The HM was operated in various modes to test the adaptability of the GENTLE/A system based on the leading/lagging performance of the user. Thirty-two healthy participants took part in the experiment, which comprised a training phase followed by the actual performance phase. The leading or lagging role of the participant could be used successfully to adjust the duration required by that participant to execute point-to-point movements, in various modes of robot operation and under various conditions. The adaptability of the GENTLE/A system was clearly evident from the durations recorded. The regression results showed that the participants required lower execution times with the help of a real object when compared to just a virtual object.
The 'reaching away' movements were longer to execute when compared to the 'returning towards' movements irrespective of the influence of the gravity on the direction of the movement. The GENTLE/A system was able to adapt so that the duration required to execute point-to-point movement was according to the leading or lagging performance of the user with respect to the robot. This adaptability could be useful in the clinical settings when stroke subjects interact with the system and could also serve as an assessment parameter across various interaction sessions. As the system adapts to user input, and as the task becomes easier through practice, the robot would auto-tune for more demanding and challenging interactions. The improvement in performance of the participants in an embedded environment when compared to a virtual environment also shows promise for clinical applicability, to be tested in due time. Studying the physiology of upper arm to understand the muscle groups involved, and their influence on various movements executed during this study forms a key part of our future work.
Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI
Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo
2008-01-01
Background When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines, it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience, the ability to attribute intentions and desires to others is referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase in the human-likeness of interaction partners modulates the participants' ToM-associated cortical activity. Methodology/Principal Findings By means of functional magnetic resonance imaging (subjects n = 20) we investigated cortical activity modulation during a highly interactive human-robot game. Increasing degrees of human-likeness of the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as the experimental task, which allowed for an implicit detection of ToM-associated cortical activity. During the experiment, participants always played against a random sequence, unbeknownst to them. Irrespective of the surmised interaction partners' responses, participants indicated having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer
Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.
de Greeff, Joachim; Belpaeme, Tony
2015-01-01
Social learning is a powerful method for cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems, and for robots specifically. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring their tutoring to the robot's performance as opposed to teaching randomly. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot's bids while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better-quality learning input to artificial systems, resulting in improved learning performance.
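The robot's "expression of a learning preference" amounts to querying where its knowledge is weakest; a minimal sketch of such a query policy follows (the word list and confidence scores are hypothetical, and the real system's lexicon model is certainly richer):

```python
def pick_query(word_confidence):
    """Return the word the robot is least confident about -- the knowledge
    gap it would signal to the human tutor with a social cue."""
    return min(word_confidence, key=word_confidence.get)

# Hypothetical lexicon state after a few turn-based language games.
knowledge = {"ball": 0.9, "cup": 0.2, "dog": 0.6}
query = pick_query(knowledge)  # the robot bids for tutoring on "cup"
```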
TRICCS: A proposed teleoperator/robot integrated command and control system for space applications
NASA Technical Reports Server (NTRS)
Will, R. W.
1985-01-01
Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.
Architecture for Multiple Interacting Robot Intelligences
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II (Inventor)
2008-01-01
An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a database associative memory (DBAM) that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.
Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre
2013-01-01
Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children's responses towards the humanoid robot KASPAR in an interview context differ in comparison to their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children's behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR in a very similar way to how they interacted with a human interviewer. The quantitative behaviour analysis revealed that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an 'interviewer' for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625
Larriba, Ferran; Raya, Cristóbal; Angulo, Cecilio; Albo-Canals, Jordi; Díaz, Marta; Boldú, Roger
2016-07-15
The PATRICIA research project is about using pet robots to reduce pain and anxiety in hospitalized children. The study began 2 years ago, and the advances made in this project are believed to be significant. Patients, parents, nurses, psychologists, and engineers have adopted the Pleo robot, a baby-dinosaur robotic pet, which works in different ways to assist children during hospitalization. The focus is on creating a wireless communication system for the Pleo so that the coordinator, who conducts therapy with the child, can monitor, understand, and control Pleo's behavior at any moment. This article reports how this technological function is being developed and tested. Wireless communication between the Pleo and an Android device is achieved. The developed Android app allows the user to obtain any state of the robot without stopping its interaction with the patient. Moreover, information is sent to a cloud, so that robot moods, states and interactions can be shared among different robots. Attachment to the Pleo was sustained for more than 1 month of therapy work with children, suggesting the investment offers positive therapeutic possibilities. This technical improvement to the Pleo addresses two key issues in social robotics: the need for an enhanced response to maintain the attention and engagement of the child, and the use of the system as a platform to collect the states of the child's progress for clinical purposes.
Augmented Robotics Dialog System for Enhancing Human–Robot Interaction
Alonso-Martín, Fernando; Castro-González, Aívaro; de Gorostiza Luengo, Francisco Javier Fernandez; Salichs, Miguel Ángel
2015-01-01
Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. PMID:26151202
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
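The outer loop above casts impedance-parameter tuning as an LQR problem. As a hedged, minimal sketch (not the paper's integral reinforcement learning solver, which avoids needing the plant model), a discrete-time LQR gain for a known toy plant can be computed by iterating the Riccati difference equation; the double-integrator plant, weights, and variable names here are illustrative assumptions:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR via the Riccati difference equation:
    P <- Q + A'P(A - BK), with K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical double-integrator stand-in for the prescribed impedance model.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])   # weight on tracking error (proxy for human effort)
R = np.array([[0.1]])      # weight on control effort
K, P = dlqr(A, B, Q, R)

# The closed loop A - BK should be stable: spectral radius below 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The paper's contribution is precisely that this gain can be learned online from data when `A` (which embeds the unknown human dynamics) is unavailable; the sketch only shows the model-based baseline being approximated.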
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q; MacDonald, Bruce A; Wegner, Daniel M
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but is not absolutely, human. An underlying mechanism may be that appearance affects users' perceptions of the robot's personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot's mind, personality and eeriness. A repeated measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, silver face, or no-face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot's mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having most mind, being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot's face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot's personality. Designers should be aware that the face on a robot's display screen can affect both the perceived mind and personality of the robot.
An interactive control algorithm used for equilateral triangle formation with robotic sensors.
Li, Xiang; Chen, Hongcai
2014-04-22
This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used for three neighboring robotic sensors which are distributed randomly to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considering the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out for verifying the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully regardless of their initial distribution using the same TFAs.
An Interactive Control Algorithm Used for Equilateral Triangle Formation with Robotic Sensors
Li, Xiang; Chen, Hongcai
2014-01-01
This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used for three neighboring robotic sensors which are distributed randomly to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considering the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out for verifying the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully regardless of their initial distribution using the same TFAs. PMID:24759118
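The TFA itself is not reproduced in the abstract; as a hedged illustration of the idea, a standard distance-based formation law (an assumption, not the authors' algorithm) drives three randomly placed robots toward an equilateral triangle with unit side length:

```python
import numpy as np

def formation_step(X, d=1.0, k=0.2):
    """One synchronous update: each robot moves along the bearing to each
    other robot, proportionally to the error between the current
    inter-robot distance and the desired side length d."""
    V = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            diff = X[j] - X[i]
            dist = np.linalg.norm(diff)
            V[i] += k * (dist - d) * diff / dist
    return X + V

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(3, 2))   # three robots, random planar start
for _ in range(300):
    X = formation_step(X)
sides = [np.linalg.norm(X[i] - X[(i + 1) % 3]) for i in range(3)]
# all three side lengths should approach d = 1
```

This gradient-style law converges for generic (non-collinear) initial placements, echoing the abstract's claim that the formation emerges regardless of the initial distribution; the paper's asynchronous, sensor-driven execution and Lyapunov analysis are not modeled here.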
NASA Astrophysics Data System (ADS)
Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.
1997-12-01
This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill, based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.
NASA VERVE: Interactive 3D Visualization Within Eclipse
NASA Technical Reports Server (NTRS)
Cohen, Tamar; Allan, Mark B.
2014-01-01
At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts control a free-flying robot on board the ISS. We will show in detail how to code with VERVE, how to interact between SWT controls and the Ardor3D scenario, and share example code.
A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction
2011-10-01
directly affects the willingness of people to accept robot-produced information, follow robots' suggestions, and thus benefit from the advantages inherent...perceived complexity of operation). Consequently, if the perceived risk of using the robot exceeds its perceived benefit, practical operators almost...necessary presence of a human caregiver (Graf, Hans, & Schraft, 2004). Other robotic devices, such as wheelchairs (Yanco, 2001) and exoskeletons (e.g
Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study
Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna
2018-01-01
Background Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. Objective The aim of this study was to explore participants’ qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. Methods NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot’s head sensor. Evaluations were content-analyzed utilizing Boyatzis’ steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Results Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. 
Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. Some participants felt that the intervention increased their physical activity levels. Conclusions Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. PMID:29724701
Incorporating a Robot into an Autism Therapy Team
2012-04-01
with autism spectrum disorder. social interactions. Furthermore, about 50 percent of children identified with ASD present with insufficient...engagement with a robot is not a goal but rather a means for helping such children Autism spectrum disorder (ASD) refers to a group of pervasive develop...therapeutic role as toys for children with autism.9 She observed that • children wanted to interact with the robot for 10 minutes or more, • children were
Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1997-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Marietta Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Gerłowska, Justyna; Skrobas, Urszula; Grabowska-Aleksandrowicz, Katarzyna; Korchut, Agnieszka; Szklener, Sebastian; Szczęśniak-Stańczyk, Dorota; Tzovaras, Dimitrios; Rejdak, Konrad
2018-01-01
The aim of the present study is to present the results of the assessment of clinical application of the robotic assistant for patients suffering from mild cognitive impairments (MCI) and Alzheimer Disease (AD). The human-robot interaction (HRI) evaluation approach taken within the study is a novelty in the field of social robotics. The proposed assessment of the robotic functionalities is based on end-user perception of attractiveness, usability and potential societal impact of the device. The methods of evaluation applied consist of the User Experience Questionnaire (UEQ), AttrakDiff and the societal impact inventory tailored for the project purposes. The prototype version of the Robotic Assistant for MCI patients at Home (RAMCIP) was tested in a semi-controlled environment at the Department of Neurology (Lublin, Poland). Eighteen elderly participants, 10 healthy and 8 with MCI, performed everyday tasks and functions facilitated by RAMCIP. The tasks consisted of semi-structured scenarios such as medication intake, hazardous events prevention, and social interaction. No differences between the groups of subjects were observed in terms of perceived attractiveness, usability or societal impact of the device. The robotic assistant's societal impact and attractiveness were highly assessed. The usability of the device was reported as neutral due to the short time of interaction.
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant form of dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft at a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage to do any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
NASA Technical Reports Server (NTRS)
Garrahan, Steven L.; Tolson, Robert H.; Williams, Robert L., II
1995-01-01
Industrial robots are usually attached to a rigid base. Placing the robot on a compliant base introduces dynamic coupling between the two systems. The Vehicle Emulation System (VES) is a six DOF platform that is capable of modeling this interaction. The VES employs a force-torque sensor as the interface between robot and base. A computer simulation of the VES is presented. Each of the hardware and software components is described and Simulink is used as the programming environment. The simulation performance is compared with experimental results to validate accuracy. A second simulation which models the dynamic interaction of a robot and a flexible base acts as a comparison to the simulated motion of the VES. Results are presented that compare the simulated VES motion with the motion of the VES hardware using the same admittance model. The two computer simulations are compared to determine how well the VES is expected to emulate the desired motion. Simulation results are given for robots mounted to the end effector of the Space Shuttle Remote Manipulator System (SRMS). It is shown that for fast motions of the two robots studied, the SRMS experiences disturbances on the order of centimeters. Larger disturbances are possible if different manipulators are used.
Singularity now: using the ventricular assist device as a model for future human-robotic physiology.
Martin, Archer K
2016-04-01
In our 21st century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices - VAD.
Singularity now: using the ventricular assist device as a model for future human-robotic physiology
Martin, Archer K.
2016-01-01
In our 21st century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today’s world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named “IshBot”) can best be studied in ventricular assist devices – VAD. PMID:28913480
Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action
Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra
2014-01-01
Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. For evaluation of the concept an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
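As a hedged sketch of the oscillator-theoretic idea above (not the authors' controller), two agents modeled as Kuramoto-style phase oscillators phase-lock under mutual coupling; all parameters and names here are illustrative assumptions:

```python
import numpy as np

def simulate(K=2.0, w1=1.0, w2=1.2, dt=0.01, steps=2000):
    """Euler-integrate two coupled phase oscillators with natural
    frequencies w1, w2 and coupling gain K; return the final wrapped
    phase difference th2 - th1 in (-pi, pi]."""
    th1, th2 = 0.0, 2.5            # initial task phases (rad)
    for _ in range(steps):
        d1 = w1 + K * np.sin(th2 - th1)
        d2 = w2 + K * np.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return (th2 - th1 + np.pi) % (2 * np.pi) - np.pi

offset = simulate()
# Phase-locking: the difference settles where (w2 - w1) = 2K sin(offset),
# i.e. offset = arcsin((w2 - w1) / (2K)) = arcsin(0.05) for these values.
```

The residual offset reflects the frequency mismatch; the paper's additional event-based anchoring (segmenting trajectories into primitives) would act on top of such continuous phase dynamics.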
The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction
2009-03-01
distributed robots. Proceedings of the Computer Supported Cooperative Work Conference’02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and...humanoid robots. Proceedings of HRI’06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita
Comani, Silvia; Schinaia, Lorenzo; Tamburro, Gabriella; Velluto, Lucia; Sorbi, Sandro; Conforto, Silvia; Guarnieri, Biancamaria
2015-01-01
One post-stroke patient underwent neuro-motor rehabilitation of one upper limb with a novel system combining a passive robotic device, Virtual Reality training applications and high resolution electroencephalography (HR-EEG). The outcome of the clinical tests and the evaluation of the kinematic parameters recorded with the robotic device concurred to highlight an improved motor recovery of the impaired limb despite the age of the patient, his compromised motor function, and the start of rehabilitation at the 3rd week post stroke. The time frequency and functional source analysis of the HR-EEG signals permitted to quantify the functional changes occurring in the brain in association with the rehabilitation motor tasks, and to highlight the recovery of the neuro-motor function.
Virtual Presence: One Step Beyond Reality
NASA Technical Reports Server (NTRS)
Budden, Nancy Ann
1997-01-01
Our primary objective was to team up a group consisting of scientists and engineers from two different NASA cultures, and simulate an interactive teleoperated robot conducting geologic field work on the Moon or Mars. The information derived from the experiment will benefit both the robotics team and the planetary exploration team in the areas of robot design and development, and mission planning and analysis. The Earth Sciences and Space and Life Sciences Division combines the past with the future, contributing experience from Apollo crews exploring the lunar surface, knowledge of reduced gravity environments, the performance limits of EVA suits, and future goals for human exploration beyond low Earth orbit. The Automation, Robotics, and Simulation Division brings to the table the technical expertise of robotic systems and the future goals of highly interactive robotic capabilities, treading on the edge of technology by joining for the first time a unique combination of telepresence with virtual reality.
Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro
2018-05-30
Although robotics technology has developed immensely, people's uncertainty about engaging fully in human-robot interaction is still growing. Many recent studies have therefore examined human factors that might influence likability, such as personality, and found that compatibility between a human's and a robot's personality (expressions of personality characteristics) can enhance likability. However, it is still unclear whether a specific means and strategy of robot nonverbal behaviour enhances likability for humans with different personality traits, and whether there is a relationship between a robot's nonverbal behaviours and likability based on personality. In this study, we investigated the interaction via gaze and head-nodding behaviours (mutual gaze convergence and head-nodding synchrony) between introvert/extravert participants and a robot under two communication strategies (backchanneling and turn-taking). Our findings reveal that introvert participants are positively affected by backchanneling in the robot's head-nodding behaviour, which results in substantial head-nodding synchrony, whereas extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between a robot's nonverbal behaviour and human likability based on personality.
Suitability of healthcare robots for a dementia unit and suggested improvements.
Robinson, Hayley; MacDonald, Bruce A; Kerse, Ngaire; Broadbent, Elizabeth
2013-01-01
To investigate the suitability of a new eldercare robot (Guide) for people with dementia and their caregivers compared with one that has been successfully used before (Paro), and to generate suggestions for improved robot enhanced dementia care. Cross-sectional study. A researcher demonstrated both robots in a random order to each staff member alone, or to each resident together with his/her relative(s). The researcher encouraged the participants to interact with each robot and asked staff and relatives a series of open ended questions about each robot. A secure dementia residential facility in Auckland, New Zealand. Ten people with dementia and 11 of their relatives, and five staff members. Each robot interaction was video-taped and coded for the number of times the resident looked at, smiled, touched, and talked to and about each robot, as well as relative interactions with the resident. Qualitative analysis was used to code the open ended questions. Residents smiled, touched and talked to Paro significantly more than Guide. Paro was found to be more acceptable to family members, staff, and residents, although many acknowledged that Guide had the potential to be useful if adapted for this population in terms of ergonomics and simplification. Healthcare robots in dementia settings have to be simple and easy to use as well as stimulating and entertaining. This research highlights how eldercare robots may be adapted to have the best effects in dementia settings. It is concluded that Paro's sounds could be modified to be more acceptable to this population. The ergonomic design of Guide could be reviewed and the software application could be simplified and targeted to people with dementia. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.
Cognitive patterns: giving autonomy some context
NASA Astrophysics Data System (ADS)
Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike
2013-05-01
Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
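As a rough illustration of how FACS-driven control of a robotic head can work, the sketch below maps action-unit (AU) intensities to normalized joint targets. The AU-to-joint table, weights and joint names are hypothetical assumptions for demonstration, not the actual Muecas controller:

```python
# Each FACS action unit drives one or more head joints with a weight.
# Table and joint names are illustrative only.
AU_TO_JOINTS = {
    1:  [("eyebrow_left_inner", 0.8), ("eyebrow_right_inner", 0.8)],  # inner brow raiser
    12: [("mouth_corner_left", 1.0), ("mouth_corner_right", 1.0)],    # lip corner puller
    26: [("jaw", 0.6)],                                               # jaw drop
}

def aus_to_joint_targets(au_intensities):
    """Convert AU intensities (0..1) into normalized joint targets (0..1)."""
    targets = {}
    for au, intensity in au_intensities.items():
        for joint, weight in AU_TO_JOINTS.get(au, []):
            # Accumulate contributions; clamp so co-activated AUs stay in range.
            targets[joint] = min(1.0, targets.get(joint, 0.0) + weight * intensity)
    return targets

# A half-intensity smile (AU12) with a slight jaw drop (AU26):
print(aus_to_joint_targets({12: 0.5, 26: 0.5}))
```

The same table can be read in reverse for recognition: estimated joint (or facial landmark) displacements are projected back onto AU intensities, which is what makes FACS a convenient shared vocabulary for both synthesis and imitation.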
Parisi, Domenico
2010-01-01
Trying to understand human language by constructing robots that have language necessarily implies an embodied view of language, where the meaning of linguistic expressions is derived from the physical interactions of the organism with the environment. The paper describes a neural model of language according to which the robot's behaviour is controlled by a neural network composed of two sub-networks, one dedicated to the non-linguistic interactions of the robot with the environment and the other one to processing linguistic input and producing linguistic output. We present the results of a number of simulations using the model and we suggest how the model can be used to account for various language-related phenomena such as disambiguation, the metaphorical use of words, the pervasive idiomaticity of multi-word expressions, and mental life as talking to oneself. The model implies a view of the meaning of words and multi-word expressions as a temporal process that takes place in the entire brain and has no clearly defined boundaries. The model can also be extended to emotional words if we assume that an embodied view of language includes not only the interactions of the robot's brain with the external environment but also the interactions of the brain with what is inside the body.
Portraits of self-organization in fish schools interacting with robots
NASA Astrophysics Data System (ADS)
Aureli, M.; Fiorilli, F.; Porfiri, M.
2012-05-01
In this paper, we propose an enabling computational and theoretical framework for the analysis of experimental instances of collective behavior in response to external stimuli. In particular, this work addresses the characterization of aggregation and interaction phenomena in robot-animal groups through the exemplary analysis of fish schooling in the vicinity of a biomimetic robot. We adapt global observables from statistical mechanics to capture the main features of the shoal collective motion and its response to the robot from experimental observations. We investigate the shoal behavior by using a diffusion mapping analysis performed on these global observables that also informs the definition of relevant portraits of self-organization.
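Two global observables commonly adapted from statistical mechanics for schooling data are the polarization and rotation order parameters. A minimal sketch, assuming 2-D position and velocity tracks with nonzero speeds (the paper's exact observables may differ):

```python
import math

def polarization(velocities):
    """|mean of unit velocity vectors|: 1 = fully aligned school, 0 = disordered."""
    units = [(vx / math.hypot(vx, vy), vy / math.hypot(vx, vy))
             for vx, vy in velocities]
    mx = sum(u[0] for u in units) / len(units)
    my = sum(u[1] for u in units) / len(units)
    return math.hypot(mx, my)

def rotation(positions, velocities):
    """|mean normalized angular momentum about the group centroid|: detects milling."""
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    total = 0.0
    for (x, y), (vx, vy) in zip(positions, velocities):
        rx, ry = x - cx, y - cy
        r = math.hypot(rx, ry) or 1.0
        s = math.hypot(vx, vy) or 1.0
        total += (rx * vy - ry * vx) / (r * s)   # z-component of r-hat x v-hat
    return abs(total / len(positions))

# Three fish swimming in the same direction are fully polarized:
print(polarization([(1.0, 0.0), (2.0, 0.0), (0.5, 0.0)]))  # → 1.0
```

Tracking these scalars over time, e.g. before and after the robot is introduced, gives exactly the kind of low-dimensional time series on which a diffusion-mapping analysis can be run.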
Project InterActions: A Multigenerational Robotic Learning Environment
NASA Astrophysics Data System (ADS)
Bers, Marina U.
2007-12-01
This paper presents Project InterActions, a series of 5-week workshops in which very young learners (4- to 7-year-old children) and their parents come together to build and program a personally meaningful robotic project in the context of a multigenerational robotics-based community of practice. The goal of these family workshops is to teach both parents and children about the mechanical and programming aspects involved in robotics, as well as to initiate them in a learning trajectory with and about technology. Results from this project address different ways in which parents and children learn together and provide insights into how to develop educational interventions that would educate parents, as well as children, in new domains of knowledge and skills such as robotics and new technologies.
Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony
2010-01-01
This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e., words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503
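Back-Propagation-Through-Time itself can be shown on a toy scalar recurrent unit: unroll the recurrence forward while storing every hidden state, then push the error backward through the stored states. This is only the algorithmic skeleton under invented numbers, not the iCub controller from the paper:

```python
def bptt_step(xs, target, w, u, lr=0.05):
    # Forward pass: unroll h_t = w*h_{t-1} + u*x_t, keeping every state.
    hs = [0.0]
    for x in xs:
        hs.append(w * hs[-1] + u * x)
    loss = (hs[-1] - target) ** 2

    # Backward pass: send the error back through the unrolled time steps.
    dw = du = 0.0
    delta = 2.0 * (hs[-1] - target)       # dL/dh_T
    for t in range(len(xs), 0, -1):
        dw += delta * hs[t - 1]           # contribution of step t to dL/dw
        du += delta * xs[t - 1]           # contribution of step t to dL/du
        delta *= w                        # dL/dh_{t-1} = w * dL/dh_t
    return w - lr * dw, u - lr * du, loss

# A few hundred gradient steps drive the loss toward zero:
w, u = 0.1, 0.1
for _ in range(200):
    w, u, loss = bptt_step([0.5, 1.0], target=1.0, w=w, u=u)
print(loss < 1e-4)  # → True
```

The same structure scales to vector states and nonlinearities; only the local derivative in the `delta` update changes.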
Social cognitive neuroscience and humanoid robotics.
Chaminade, Thierry; Cheng, Gordon
2009-01-01
We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We will first review important aspects of this framework. In the second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.
Coeckelbergh, Mark; Pop, Cristina; Simut, Ramona; Peca, Andreea; Pintea, Sebastian; David, Daniel; Vanderborght, Bram
2016-02-01
The use of robots in therapy for children with autism spectrum disorder (ASD) raises issues concerning the ethical and social acceptability of this technology and, more generally, about human-robot interaction. However, philosophical papers on the ethics of human-robot interaction usually do not take stakeholders' views into account; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism therapy community. To support responsible research and innovation in this field, this paper identifies a range of ethical, social and therapeutic concerns, and presents and discusses the results of an exploratory survey that investigated these issues and explored stakeholders' expectations about this kind of therapy. We conclude that although in general stakeholders approve of using robots in therapy for children with ASD, it is wise to avoid replacing therapists by robots and to develop and use robots that have what we call supervised autonomy. This is likely to create more trust among stakeholders and improve the quality of the therapy. Moreover, our research suggests that issues concerning the appearance of the robot need to be adequately dealt with by the researchers and therapists. For instance, our survey suggests that zoomorphic robots may be less problematic than robots that look too much like humans.
Research and development of service robot platform based on artificial psychology
NASA Astrophysics Data System (ADS)
Zhang, Xueyuan; Wang, Zhiliang; Wang, Fenhua; Nagai, Masatake
2007-12-01
Some related work on the control architecture of robot systems is briefly summarized. Building on that discussion, this paper proposes a control architecture for a service robot based on artificial psychology. In this control architecture, the robot obtains cognition of the environment through sensors; this input is then handled by intelligent, affective and learning models, and the robot finally expresses its reaction to outside stimulation through its behavior. To clarify the architecture, its hierarchical structure is also discussed. The control system of the robot can be divided into five layers, namely the physical layer, drives layer, information-processing and behavior-programming layer, application layer, and system inspection and control layer. This paper shows how to achieve system integration across hardware modules, software interfaces and fault diagnosis. The embedded system GENE-8310 is selected as the PC platform of the robot APROS-I, and its primary storage medium is a CF card. The arms and body of the robot are constituted by 13 motors and some connecting fittings. In addition, the robot has a head with emotional facial expressions, and the head has 13 DOFs. The emotional and intelligent model is one of the most important parts of human-machine interaction. In order to better simulate human emotion, an emotional interaction model for the robot is proposed according to Maslow's theory of need levels and Simonov's theory of mood information. This architecture has already been used in our intelligent service robot.
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
Robots Spur Software That Lends a Hand
NASA Technical Reports Server (NTRS)
2014-01-01
While building a robot to assist astronauts in space, Johnson Space Center worked with partners to develop robot reasoning and interaction technology. The partners created Robonaut 1, which led to Robonaut 2, and the work also led to patents now held by Universal Robotics in Nashville, Tennessee. The NASA-derived technology is available for use in warehousing, mining, and more.
Antagonistic actuation and stiffness control in soft inflatable robots
NASA Astrophysics Data System (ADS)
Althoefer, Kaspar
2018-06-01
Soft robots promise solutions for a wide range of applications that cannot be achieved with traditional, rigid-component robots. A key challenge is the creation of robotic structures that can vary their stiffness at will, for example, by using antagonistic actuators, to optimize their interaction with the environment and be able to exert high forces.
A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot.
Kim, Min Su; Lee, Jae Geun; Kang, Soon Ju
2016-06-08
In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robot "Pet-Bot". To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, meaning the robot recognizes the owner's approach on its own. Second, the Pet-Bot's activity is synchronized with the owner's behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but cannot make a decision, or when the owner wants to stop the Pet-Bot synchronization mode to relax); for these cases, we have adopted a gesture-recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify motion recognition of, and synchronization with, humans in real time. The results showed a guaranteed level of accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot's control performance when following or leading a human in real time.
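As a hedged sketch of the kind of accelerometer-based gesture check such a wearable could run to interrupt synchronization, the snippet below counts threshold crossings of the acceleration magnitude. The shake threshold, peak count and sample format are invented for illustration, not the paper's actual classifier:

```python
import math

GRAVITY = 9.81          # m/s^2, magnitude reported by a resting accelerometer
SHAKE_THRESHOLD = 15.0  # magnitude that counts as a vigorous motion (assumed)
MIN_PEAKS = 3           # crossings required within the sample window (assumed)

def is_shake(samples):
    """samples: list of (ax, ay, az) readings from a 3-D accelerometer."""
    peaks = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > SHAKE_THRESHOLD and not above:
            peaks += 1          # count each excursion above threshold once
            above = True
        elif mag <= SHAKE_THRESHOLD:
            above = False
    return peaks >= MIN_PEAKS

rest = [(0.0, 0.0, GRAVITY)] * 20
shake = rest[:5] + [(20.0, 0.0, 0.0), (0.0, 0.0, GRAVITY)] * 4 + rest[:5]
print(is_shake(rest), is_shake(shake))  # → False True
```

A real device would run this over a sliding window and debounce the result before toggling the BLE-side synchronization state.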
Undersea applications of dexterous robotics
NASA Technical Reports Server (NTRS)
Gittleman, Mark M.
1994-01-01
The evolution and application of dexterous robotics in the undersea energy production industry, and how this mature technology has affected planned SSF dexterous robotic tasks, are examined. Undersea telerobots, or Remotely Operated Vehicles (ROVs), have evolved in design and use since the mid-1970s. Originally developed to replace commercial divers for both planned and unplanned tasks, they are now most commonly used to perform planned robotic tasks in all phases of assembly, inspection, and maintenance of undersea structures and installations. To accomplish these tasks, the worksites, the tasks themselves, and the tools are now engineered with both the telerobot's and the diver's capabilities in mind. In many cases, this planning has permitted a reduction in telerobot system complexity and cost. The philosophies and design practices that have resulted in the successful incorporation of telerobotics into the highly competitive and cost-conscious offshore production industry have been largely ignored in the space community. Cases where these philosophies have been adopted, or may be successfully adopted in the near future, are explored.
Bhatia, Parisha; Mohamed, Hossam Eldin; Kadi, Abida; Walvekar, Rohan R.
2015-01-01
Robot-assisted thyroid surgery has been the latest advance in the evolution of thyroid surgery after endoscopy-assisted procedures. The advantages of superior field vision and the technical advancements of robotic technology have permitted novel remote-access (trans-axillary and retro-auricular) surgical approaches. Interestingly, several remote-access surgical ports using the robotic surgical system and endoscopic techniques have been customized to avoid the social stigma of a visible scar. The current literature has described their various advantages in terms of post-operative outcomes; however, the associated financial burden and the additional training and expertise required hinder widespread adoption into endocrine surgery practices. These approaches offer excellent cosmesis and a shorter learning curve, and reduce discomfort for surgeons, who operate ergonomically through a robotic console. This review aims to provide details of the various remote-access techniques being offered for thyroid resection. Though these have been reported to be safe and feasible approaches for thyroid surgery, their efficacy still requires further evaluation. PMID:26425450
Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study.
Galvão Gomes da Silva, Joana; Kavanagh, David J; Belpaeme, Tony; Taylor, Lloyd; Beeson, Konna; Andrade, Jackie
2018-05-03
Motivational interviewing is an effective intervention for supporting behavior change but traditionally depends on face-to-face dialogue with a human counselor. This study addressed a key challenge for the goal of developing social robotic motivational interviewers: creating an interview protocol, within the constraints of current artificial intelligence, which participants will find engaging and helpful. The aim of this study was to explore participants' qualitative experiences of a motivational interview delivered by a social robot, including their evaluation of usability of the robot during the interaction and its impact on their motivation. NAO robots are humanoid, child-sized social robots. We programmed a NAO robot with Choregraphe software to deliver a scripted motivational interview focused on increasing physical activity. The interview was designed to be comprehensible even without an empathetic response from the robot. Robot breathing and face-tracking functions were used to give an impression of attentiveness. A total of 20 participants took part in the robot-delivered motivational interview and evaluated it after 1 week by responding to a series of written open-ended questions. Each participant was left alone to speak aloud with the robot, advancing through a series of questions by tapping the robot's head sensor. Evaluations were content-analyzed utilizing Boyatzis' steps: (1) sampling and design, (2) developing themes and codes, and (3) validating and applying the codes. Themes focused on interaction with the robot, motivation, change in physical activity, and overall evaluation of the intervention. Participants found the instructions clear and the navigation easy to use. Most enjoyed the interaction but also found it was restricted by the lack of individualized response from the robot. Many positively appraised the nonjudgmental aspect of the interview and how it gave space to articulate their motivation for change. 
Some participants felt that the intervention increased their physical activity levels. Social robots can achieve a fundamental objective of motivational interviewing, encouraging participants to articulate their goals and dilemmas aloud. Because they are perceived as nonjudgmental, robots may have advantages over more humanoid avatars for delivering virtual support for behavioral change. ©Joana Galvão Gomes da Silva, David J Kavanagh, Tony Belpaeme, Lloyd Taylor, Konna Beeson, Jackie Andrade. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.05.2018.
Robinson, Hayley; MacDonald, Bruce; Broadbent, Elizabeth
2015-03-01
To investigate the effects of interacting with the companion robot, Paro, on blood pressure and heart rate of older people in a residential care facility. This study used a repeated measures design. Twenty-one residents in rest-home and hospital-level care had their blood pressure taken three times: before, during and after interacting with the seal robot. Four residents who did not interact with the robot were excluded from the final analysis (final n = 17). The final analysis found that systolic and diastolic blood pressure changed significantly over time, as did heart rate. Planned comparisons revealed that systolic and diastolic blood pressure decreased significantly from baseline to when residents had Paro (systolic, P = 0.048; diastolic, P = 0.05). Diastolic blood pressure increased significantly after Paro was withdrawn (P = 0.03). Interacting with Paro has a physiological effect on cardiovascular measures, which is similar to findings with live animals. © 2013 ACOTA.
Dynamic photogrammetric calibration of industrial robots
NASA Astrophysics Data System (ADS)
Maas, Hans-Gerd
1997-07-01
Today's developments in industrial robots focus on aims such as gains in flexibility, improved interaction between robots, and reduced down-times. A very important method for achieving these goals is off-line programming. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This places high demands on the geometric accuracy of a robot. While the repeatability of robot poses in teach-in mode is often better than 0.1 mm, the absolute pose accuracy potential of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear-out, load, temperature, and insufficient knowledge of model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing the kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economic technique with realtime potential, offers itself for this purpose. The paper analyzes the requirements placed on a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration in the past, a photogrammetric robot calibration system based on off-the-shelf, low-cost hardware components is shown and results of pilot studies are discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, realtime capabilities are discussed. In the pilot studies, standard deviations of 0.05 - 0.25 mm in the three coordinate directions could be achieved over a robot work range of 1.7 × 1.5 × 1.0 m³.
The realtime capabilities of the technique make it possible to go beyond kinematic robot calibration and to perform dynamic calibration as well as photogrammetric on-line control of a robot in action.
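The core idea of kinematic calibration from external reference measurements can be illustrated on a planar 2-link arm: photogrammetrically measured end-effector positions are used to re-estimate the link lengths by linear least squares. The arm model and all numbers below are assumptions for demonstration, not the system in the paper:

```python
import math

TRUE_L = (1.002, 0.797)      # "real" link lengths, unknown to the controller
NOMINAL_L = (1.0, 0.8)       # nominal CAD values the controller would use

def fk(l1, l2, q1, q2):
    """Planar 2-link forward kinematics: end-effector (x, y)."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

# Photogrammetric "measurements" at a few joint configurations.
poses = [(0.1, 0.4), (0.8, -0.5), (1.5, 1.0), (-0.4, 0.9)]
meas = [fk(*TRUE_L, q1, q2) for q1, q2 in poses]

# x and y are linear in (l1, l2), so stack rows [cos q1, cos(q1+q2)] etc.
# and solve the 2x2 normal equations (A^T A) l = A^T b by hand.
rows, b = [], []
for (q1, q2), (mx, my) in zip(poses, meas):
    rows += [[math.cos(q1), math.cos(q1 + q2)],
             [math.sin(q1), math.sin(q1 + q2)]]
    b += [mx, my]
a11 = sum(r[0] * r[0] for r in rows)
a12 = sum(r[0] * r[1] for r in rows)
a22 = sum(r[1] * r[1] for r in rows)
c1 = sum(r[0] * v for r, v in zip(rows, b))
c2 = sum(r[1] * v for r, v in zip(rows, b))
det = a11 * a22 - a12 * a12
l1 = (a22 * c1 - a12 * c2) / det
l2 = (a11 * c2 - a12 * c1) / det
print(round(l1, 3), round(l2, 3))  # recovers the true lengths: 1.002 0.797
```

Real calibration estimates many more parameters (offsets, elasticities, eccentricities) and the model is nonlinear in most of them, but the structure, a kinematic model fit to externally measured poses, is the same.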
Wu, Ya-Huei; Wrobel, Jérémy; Cornuet, Mélanie; Kerhervé, Hélène; Damnée, Souad; Rigaud, Anne-Sophie
2014-01-01
There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot-acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot-acceptance, including older adults' uneasiness with technology, feeling of stigmatization, and ethical/societal issues associated with robot use. It is important to destigmatize images of assistive robots to facilitate their acceptance. Universal design aiming to increase the market for and production of products that are usable by everyone (to the greatest extent possible) might help to destigmatize assistive devices.
Attitudes and reactions to a healthcare robot.
Broadbent, Elizabeth; Kuo, I Han; Lee, Yong In; Rabindran, Joel; Kerse, Ngaire; Stafford, Rebecca; MacDonald, Bruce A
2010-06-01
The use of robots in healthcare is a new concept. The public's perception and acceptance are not well understood. The objectives were to investigate the perceptions and emotions toward the utilization of healthcare robots among individuals over 40 years of age, to investigate factors contributing to acceptance, and to evaluate differences in blood pressure checks taken by a robot and a medical student. Fifty-seven (n = 57) adults aged over 40 years, recruited from local general practitioner or gerontology group lists, participated in two cross-sectional studies. The first was an open-ended questionnaire assessing perceptions of robots. In the second study, participants had their blood pressure taken by a medical student and by a robot. Patient comfort with each encounter, perceived accuracy of each measurement, and the quality of the patient interaction were studied in each case. Readings were compared by independent t-tests, and regression analyses were conducted to predict quality ratings. Participants' perceptions about robots were influenced by their prior exposure to robots in literature or entertainment media. Participants saw many benefits and applications for healthcare robots, including simple medical procedures and physical assistance, but had some concerns about reliability, safety, and the loss of personal care. Blood pressure readings did not differ between the medical student and robot, but participants felt more comfortable with the medical student and saw the robot as less accurate. Although age and sex were not significant predictors, individuals who held more positive initial attitudes and emotions toward robots rated the robot interaction more favorably. Many people see robots as having benefits and applications in healthcare, but some have concerns. Individual attitudes and emotions regarding robots in general are likely to influence future acceptance of their introduction into healthcare processes.
The ACE multi-user web-based Robotic Observatory Control System
NASA Astrophysics Data System (ADS)
Mack, P.
2003-05-01
We have developed an observatory control system that can be operated in interactive, remote or robotic modes. In interactive and remote modes the observer typically acquires the first object and then creates a script through a window interface to complete observations for the rest of the night. The system closes early in the event of bad weather. In robotic mode, observations are submitted ahead of time through a web-based interface. We present observations made with a 1.0-m telescope using these methods.
Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems
2015-05-19
algorithm based on Age-Fitness Pareto Optimization (AFPO) ([9]) with an additional user preference objective and a neural network-based user model, we...greater than 40, which is about 5 times further than any robot traveled in our experiments. 3.3 Methods: The algorithm uses a client-server computational...architecture. The client here is an interactive program which takes a pair of controllers as input, simulates two copies of the robot with
Eizicovits, Danny; Edan, Yael; Tabak, Iris; Levy-Tzedek, Shelly
2018-01-01
Effective human-robot interaction in rehabilitation necessitates an understanding of how it should be tailored to the needs of the human. We report on a robotic system developed as a partner on a 3-D everyday task, using a gamified approach. Our goals were to: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied (a robotic arm) and a non-embodied (a computer-controlled lighting system) partner. To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. The timing of the participants' movement was primed by the response time of the system: participants moved slower when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. Slower response time of the robot compared to the computer-controlled one only affected the young group's motivation to continue playing. We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements, and to track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system. We contribute to the growing knowledge concerning personalized human-robot interactions by (1) demonstrating the priming of the human movement by the robotic movement, an important design feature, and (2) identifying response speed as a design variable, the importance of which depends on the age of the user.
Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction
de Greeff, Joachim; Belpaeme, Tony
2015-01-01
Social learning is a powerful method for cultural propagation of knowledge and skills, relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems, and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children’s social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this, a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a “mental model” of the robot, tailoring the tutoring to the robot’s performance as opposed to simply teaching at random. In addition, the social learning shows a clear gender effect, with female participants being responsive to the robot’s bids while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance. PMID:26422143
Line following using a two camera guidance system for a mobile robot
NASA Astrophysics Data System (ADS)
Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.
1996-10-01
Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft. path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm images two windows and locates their centroids; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
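The steering step this abstract outlines — project two window centroids to ground coordinates, recover the line's angle and offset, and feed both into steering — can be sketched as follows. This is a minimal illustration, not the authors' code; the gain values, the robot-frame convention (forward along +y), and the sign of the output are assumptions:

```python
import math

def steering_command(p_near, p_far, k_angle=1.0, k_offset=0.5):
    """Compute a proportional steering correction from two line centroids.

    p_near, p_far: (x, y) ground-plane coordinates (e.g. metres) of the line
    centroids seen in the near and far image windows, already projected from
    image coordinates under the flat-ground assumption.
    Returns a steering angle (radians) blending angular and lateral error.
    """
    dx = p_far[0] - p_near[0]
    dy = p_far[1] - p_near[1]
    # Heading of the line relative to the robot's forward (+y) axis.
    line_angle = math.atan2(dx, dy)
    # Signed perpendicular distance from the robot origin (0, 0) to the line.
    norm = math.hypot(dx, dy)
    offset = (dx * p_near[1] - dy * p_near[0]) / norm
    # Proportional blend of the two errors; gains are illustrative.
    return -(k_angle * line_angle + k_offset * offset)
```

With the line directly ahead the correction is zero; a laterally offset or slanted line yields a nonzero correction.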
Control of intelligent robots in space
NASA Technical Reports Server (NTRS)
Freund, E.; Buehler, CH.
1989-01-01
In view of space activities like the International Space Station, the Man-Tended Free-Flyer (MTFF) and free-flying platforms, the development of intelligent robotic systems is gaining increasing importance. The range of applications that have to be performed by robotic systems in space includes, e.g., the execution of experiments in space laboratories, the service and maintenance of satellites and flying platforms, the support of automatic production processes or the assembly of large network structures. Some of these tasks will require the development of bi-armed or of multiple robotic systems including functional redundancy. For the development of robotic systems which are able to perform this variety of tasks, a hierarchically structured modular concept of automation is required. This concept is characterized by high flexibility as well as by automatic specialization to the particular sequence of tasks that have to be performed. On the other hand, it has to be designed such that the human operator can influence or guide the system on different levels of control, supervision, and decision. This leads to requirements for the hardware and software concept which permit a range of application of the robotic systems from telemanipulation to autonomous operation. The realization of this goal requires strong efforts in the development of new methods, software and hardware concepts, and the integration into an automation concept.
Distributed multirobot sensing and tracking: a behavior-based approach
NASA Astrophysics Data System (ADS)
Parker, Lynne E.
1995-09-01
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
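In the ALLIANCE architecture, action selection is driven by motivational quantities that ramp up with impatience and are reset by gating conditions such as sensory feedback or acquiescence to a teammate. The following toy sketch illustrates that general idea only; the function name, rate, gate names, and threshold are illustrative assumptions, not the published formulation:

```python
def update_motivation(m, impatience_rate, sensory_feedback=1, suppression=1,
                      reset=1, acquiescence=1, threshold=100.0):
    """One update step for an ALLIANCE-style motivational behavior.

    m grows by impatience_rate each step while every gating term (0 or 1)
    stays 1; any zero gate resets the motivation to zero. The associated
    behavior set activates once the motivation crosses the threshold.
    Returns the new motivation and an activation flag.
    """
    m = (m + impatience_rate) * sensory_feedback * suppression * reset * acquiescence
    return m, m >= threshold
```

With all gates open the motivation ramps steadily and the behavior eventually activates; a teammate taking over the task (acquiescence gate dropping to zero) resets it, which is how the architecture avoids duplicated effort without explicit negotiation.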
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
Rogers, Wendy A.
2015-01-01
Ample research in social psychology has highlighted the importance of the human face in human–human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger (N = 32) and older adults (N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots. PMID:26294936
Information theory and robotics meet to study predator-prey interactions
NASA Astrophysics Data System (ADS)
Neri, Daniele; Ruberto, Tommaso; Cord-Cruz, Gabrielle; Porfiri, Maurizio
2017-07-01
Transfer entropy holds promise to advance our understanding of animal behavior, by affording the identification of causal relationships that underlie animal interactions. A critical step toward the reliable implementation of this powerful information-theoretic concept entails the design of experiments in which causal relationships could be systematically controlled. Here, we put forward a robotics-based experimental approach to test the validity of transfer entropy in the study of predator-prey interactions. We investigate the behavioral response of zebrafish to a fear-evoking robotic stimulus, designed after the morpho-physiology of the red tiger oscar and actuated along preprogrammed trajectories. From the time series of the positions of the zebrafish and the robotic stimulus, we demonstrate that transfer entropy correctly identifies the influence of the stimulus on the focal subject. Building on this evidence, we apply transfer entropy to study the interactions between zebrafish and a live red tiger oscar. The analysis of transfer entropy reveals a change in the direction of the information flow, suggesting a mutual influence between the predator and the prey, where the predator adapts its strategy as a function of the movement of the prey, which, in turn, adjusts its escape as a function of the predator motion. Through the integration of information theory and robotics, this study posits a new approach to study predator-prey interactions in freshwater fish.
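Transfer entropy TE(X→Y) quantifies the extra predictability that the past of X lends to the future of Y beyond Y's own past. A minimal plug-in estimator over binned scalar time series might look like the sketch below; the equal-width binning and history length of 1 are illustrative assumptions, not the settings used in the study:

```python
import math
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of TE(X -> Y) in bits, history length 1.

    Each series is symbolized into `bins` equal-width bins; probabilities
    are estimated from joint counts of (y_next, y_now, x_now) triples.
    """
    def symbolize(s):
        lo, hi = min(s), max(s)
        w = (hi - lo) / bins or 1.0  # guard against a constant series
        return [min(int((v - lo) / w), bins - 1) for v in s]

    xs, ys = symbolize(x), symbolize(y)
    triples = Counter(zip(ys[1:], ys[:-1], xs[:-1]))   # (y1, y0, x0)
    pairs_yy = Counter(zip(ys[1:], ys[:-1]))           # (y1, y0)
    pairs_yx = Counter(zip(ys[:-1], xs[:-1]))          # (y0, x0)
    singles = Counter(ys[:-1])                         # (y0,)
    n = len(xs) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te
```

Applied to a series y that simply copies x with a one-step lag, TE(x→y) approaches one bit while TE(y→x) stays near zero, mirroring the directional influence the authors test for with the robotic stimulus.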
Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1998-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Coordinating a Team of Robots for Urban Reconnaissance
2010-11-01
Land Warfare Conference 2010, Brisbane, November 2010. Coordinating a Team of Robots for Urban Reconnaissance. Pradeep Ranganathan, Ryan... without inundating him with micro-management. Behavioral autonomy is also critical for the human operator to productively interact. Figure 1: A... In today's systems, a human operator controls a single robot, micro-managing every action. This micro-management becomes impossible with more robots: in
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into the actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems in planning the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed so as to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
Recent trends in humanoid robotics research: scientific background, applications, and implications.
Solis, Jorge; Takanishi, Atsuo
2010-11-01
Even though the market size is still small at this moment, the applied fields of robots are gradually spreading from the manufacturing industry to other fields, as one of the important components to support an aging society. For this purpose, research on human-robot interaction (HRI) has been an emerging topic of interest for both basic research and customer applications. The studies are especially focused on behavioral and cognitive aspects of the interaction and the social contexts surrounding it. As a part of these studies, the term "roboethics" has been introduced as an approach to discuss the potentialities and the limits of robots in relation to human beings. In this article, we describe recent research trends in the field of humanoid robotics. Their principal applications and their possible impact are discussed.
Design and development of biomimetic quadruped robot for behavior studies of rats and mice.
Ishii, Hiroyuki; Masuda, Yuichi; Miyagishima, Syunsuke; Fumino, Shogo; Takanishi, Atsuo; Laschi, Cecilia; Mazzolai, Barbara; Mattoli, Virgilio; Dario, Paolo
2009-01-01
This paper presents the design and development of a novel biomimetic quadruped robot for behavior studies of rats and mice. Many studies have been performed using these animals for the purpose of understanding the human mind in psychology, pharmacology and brain science. In these fields, several experiments on social interactions have been performed using rats as basic studies of mental disorders or social learning. However, some researchers note that experiments on social interactions using animals are poorly reproducible. Therefore, we consider that the reproducibility of these experiments can be improved by using a robotic agent that interacts with an animal subject. Thus, we developed a small quadruped robot, WR-2 (Waseda Rat No. 2), that behaves like a real rat. The proportions and DOF arrangement of WR-2 are designed based on those of a mature rat. This robot has four 3-DOF legs, a 2-DOF waist and a 1-DOF neck. A microcontroller, a wireless communication module and a battery are implemented on it. Thus, it can walk, rear up on its hind limbs, and groom its body.
LiveInventor: An Interactive Development Environment for Robot Autonomy
NASA Technical Reports Server (NTRS)
Neveu, Charles; Shirley, Mark
2003-01-01
LiveInventor is an interactive development environment for robot autonomy developed at NASA Ames Research Center. It extends the industry-standard OpenInventor graphics library and scenegraph file format to include kinetic and kinematic information, a physics-simulation library, an embedded Scheme interpreter, and a distributed communication system.
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
A multimodal interface for real-time soldier-robot teaming
NASA Astrophysics Data System (ADS)
Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.
2016-05-01
Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
Simulation of Robot Kinematics Using Interactive Computer Graphics.
ERIC Educational Resources Information Center
Leu, M. C.; Mahajan, R.
1984-01-01
Development of a robot simulation program based on geometric transformation software available in most computer graphics systems and program features are described. The program can be extended to simulate robots coordinating with external devices (such as tools, fixtures, conveyors) using geometric transformations to describe the…
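The geometric-transformation approach such a simulator rests on can be illustrated by chaining 4x4 homogeneous transforms along a serial arm. The sketch below is a self-contained illustration for a planar arm; the link parameterization (rotate about z, then translate along the new x axis) is an assumption, not the paper's specific formulation:

```python
import math

def link_transform(theta, length):
    """4x4 homogeneous transform for one planar revolute link:
    rotate about z by theta, then translate along the new x axis by length."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, c * length],
            [s,  c, 0.0, s * length],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(angles, lengths):
    """End-effector pose of a planar serial arm as a chained transform;
    t[0][3], t[1][3] give the end-effector x, y in the base frame."""
    t = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for theta, l in zip(angles, lengths):
        t = matmul(t, link_transform(theta, l))
    return t
```

For a two-link arm with unit lengths, zero joint angles put the end effector at (2, 0); bending the elbow back by 90 degrees after raising the shoulder 90 degrees puts it at (1, 1). The same chaining extends naturally to the external devices (tools, fixtures, conveyors) the abstract mentions, since each is just another transform in the product.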
Robotics Competitions: The Choice Is up to You!
ERIC Educational Resources Information Center
Johnson, Richard T.; Londt, Susan E.
2010-01-01
Competitive robotics as an interactive experience can increase the level of student participation in technology education, inspire students to consider careers in technical fields, and enhance the visibility of technology education programs. Implemented correctly, a competitive robotics program can provide a stimulating learning environment for…
Starting a Robotics Program in Your County
ERIC Educational Resources Information Center
Habib, Maria A.
2012-01-01
The current mission mandates of the National 4-H Headquarters are Citizenship, Healthy Living, and Science. Robotics programs are excellent in fulfilling the Science mandate. Robotics engages students in STEM (Science, Technology, Engineering, and Mathematics) fields by providing interactive, hands-on, minds-on, cross-disciplinary learning…
Broadbent, Elizabeth; Kumar, Vinayak; Li, Xingyan; Sollers, John; Stafford, Rebecca Q.; MacDonald, Bruce A.; Wegner, Daniel M.
2013-01-01
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance and in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not fully attain, human likeness. An underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot’s mind, personality and eeriness. A repeated-measures experiment was conducted. 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in a randomized order: the robot had either a humanlike face, silver face, or no-face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot’s mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having most mind, being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot. PMID:24015263
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
NASA Astrophysics Data System (ADS)
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-02-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley.
Mathur, Maya B; Reichling, David B
2016-01-01
Android robots are entering human social life. However, human-robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate a UV in subjects' explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An "investment game" showed that the UV penetrated even more deeply to influence subjects' implicit decisions concerning robots' social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human-robot social interaction, robust UV effects pose a formidable android-specific problem. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Characteristics of Behavior of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Sato, Shigehiko; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and offers flexibility across a variety of tasks. However, controlling the individual robots remains difficult, even though methods for controlling multi-robot systems have been studied. Recently, robots have begun to appear in real-world settings, and robot emotion and sensitivity have been widely studied. In this study, a human emotion model based on psychological interaction was applied to a multi-robot system in order to develop methods for organizing multiple robots. The behavioral characteristics of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even from a rather simple configuration, and the system showed flexibility in various circumstances. An additional experiment with actual robots will be conducted based on the emotion model.
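The abstract does not specify the emotion model, but the flavor of "simple rules, complex behavior" can be sketched with a toy system in which each robot carries a scalar emotion that sets how strongly its peers are attracted to it (all names and constants are illustrative, not from the paper):

```python
def step(positions, emotions, gain=0.1):
    """One synchronous update: each robot moves under forces from peers,
    weighted by the peers' emotion (positive attracts, negative repels)."""
    new = []
    for i, xi in enumerate(positions):
        force = sum(emotions[j] * (xj - xi)
                    for j, xj in enumerate(positions) if j != i)
        new.append(xi + gain * force)
    return new

pos = [0.0, 1.0, 2.0]
emo = [0.5, 0.5, 0.5]          # all 'friendly' robots
for _ in range(50):
    pos = step(pos, emo)       # the group contracts toward its centroid
```

Flipping one entry of `emo` negative makes the others flee that robot, so qualitatively different group behavior emerges from a single parameter change.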
Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung
2017-07-01
In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, the common approach of directly measuring both signals with force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and on signals measured by common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was first implemented by directly measuring the interaction force using accurate extra sensors, and the robot's performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators performs similarly to the one using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate the implementation of interactive controllers for the UHP in robot-mediated rehabilitation training.
Estimating Tool–Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool
Zhao, Baoliang; Nelson, Carl A.
2016-01-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool–tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool–tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool–tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool–tissue interaction forces in real time, thereby increasing surgical efficiency and safety. PMID:27303591
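The sensorless idea rests on a motor model: the electrical torque inferred from current, minus inertial and friction terms, leaves the load torque due to tissue contact. A generic DC-motor sketch of that balance (all constants are illustrative placeholders, not identified from the actual grasper):

```python
def estimate_grasp_force(current_a, omega, alpha,
                         kt=0.05, inertia=1e-5, damping=1e-4, radius=0.01):
    """Estimate tool-tissue force (N) from motor current (A), shaft
    velocity omega (rad/s) and acceleration alpha (rad/s^2).
    kt: torque constant (N*m/A); radius: transmission radius (m)."""
    load_torque = kt * current_a - inertia * alpha - damping * omega
    return load_torque / radius
```

At standstill the whole electrical torque is attributed to the grasp; during fast motion the inertial and damping terms matter, which is one reason latency and accuracy need the validation the paper reports.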
Collective Behaviors of Mobile Robots Beyond the Nearest Neighbor Rules With Switching Topology.
Ning, Boda; Han, Qing-Long; Zuo, Zongyu; Jin, Jiong; Zheng, Jinchuan
2018-05-01
This paper is concerned with the collective behaviors of robots beyond the nearest neighbor rules, i.e., dispersion and flocking, when robots interact with others by applying an acute angle test (AAT)-based interaction rule. Different from a conventional nearest neighbor rule or its variations, the AAT-based interaction rule allows interactions with some far-neighbors and excludes unnecessary nearest neighbors. The resulting dispersion and flocking hold the advantages of scalability, connectivity, robustness, and effective area coverage. For the dispersion, a spring-like controller is proposed to achieve collision-free coordination. With switching topology, a new fixed-time consensus-based energy function is developed to guarantee the system stability. An upper bound of settling time for energy consensus is obtained, and a uniform time interval is accordingly set so that energy distribution is conducted in a fair manner. For the flocking, based on a class of generalized potential functions taking nonsmooth switching into account, a new controller is proposed to ensure that the same velocity for all robots is eventually reached. A co-optimizing problem is further investigated to accomplish additional tasks, such as enhancing communication performance, while maintaining the collective behaviors of mobile robots. Simulation results are presented to show the effectiveness of the theoretical results.
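The acute angle test itself is not spelled out in the abstract; one plausible reading, in which an already-selected closer robot "screens" any farther robot lying at an acute angle to it, can be sketched as follows (the screening criterion is an assumption, not the paper's definition):

```python
import math

def aat_neighbors(points, i):
    """Interaction set for robot i under a hypothetical acute-angle test:
    scan the others from nearest to farthest; drop a robot if some
    already-kept (hence closer) robot lies at an acute angle to it."""
    pi = points[i]
    vec = lambda j: (points[j][0] - pi[0], points[j][1] - pi[1])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[j], pi))
    kept = []
    for j in order:
        if not any(dot(vec(j), vec(k)) > 0 for k in kept):
            kept.append(j)
    return kept
```

Under this reading one kept robot stands in for a whole angular sector, which is the kind of pruning that keeps the interaction set small without using a fixed nearest-neighbor count.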
Estimating Tool-Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool.
Zhao, Baoliang; Nelson, Carl A
2016-10-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool-tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool-tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool-tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool-tissue interaction forces in real time, thereby increasing surgical efficiency and safety.
Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation
NASA Technical Reports Server (NTRS)
Birge, Brian
2012-01-01
The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface. Additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of how to make their home in a global sense, but their interactions based strictly on local desires create a complex superstructure. Leveraging this Emergence concept applied to a simulation of cooperative agents (robots) will allow an examination of the success of non-directed group strategy achieving specific results. Specifically the simulation will be a testbed to evaluate population based robotic exploration and cooperative strategies while leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line of sight issues will be incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often contains biologically inspired approaches, research comes from social insect observation but also data from among groups of herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration is the method under consideration for further study.
Robust sensorimotor representation to physical interaction changes in humanoid motion learning.
Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio
2015-05-01
This paper proposes a learning from demonstration system based on a motion feature, called phase transfer sequence. The system aims to synthesize the knowledge on humanoid whole body motions learned during teacher-supported interactions, and apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned by the proposed feature than by conventional similarity measurements. Phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes of the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.
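A drastically simplified version of the phase transfer sequence, reduced to the temporal order in which each signal first departs from its initial value, illustrates why the feature discards timing and amplitude while keeping order (the threshold and gait data are illustrative):

```python
def phase_transfer_sequence(channels, eps=0.5):
    """Order in which each named signal first moves more than eps away
    from its initial value; channels that never change are omitted.
    Timing and amplitude are deliberately discarded, only order is kept."""
    change_time = {}
    for name, sig in channels.items():
        for t, v in enumerate(sig):
            if abs(v - sig[0]) > eps:
                change_time[name] = t
                break
    return [n for n, _ in sorted(change_time.items(), key=lambda kv: kv[1])]

gait = {"hip": [0, 0, 1, 1], "knee": [0, 1, 1, 1], "ankle": [0, 0, 0, 1]}
```

In this toy version, a slowed-down or amplified copy of the same motion yields the same sequence, which is the sort of invariance to interaction changes the feature is after.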
Escape and surveillance asymmetries in locusts exposed to a Guinea fowl-mimicking robot predator.
Romano, Donato; Benelli, Giovanni; Stefanini, Cesare
2017-10-09
Escape and surveillance responses to predators are lateralized in several vertebrate species. However, little is known on the laterality of escapes and predator surveillance in arthropods. In this study, we investigated the lateralization of escape and surveillance responses in young instars and adults of Locusta migratoria during biomimetic interactions with a robot-predator inspired to the Guinea fowl, Numida meleagris. Results showed individual-level lateralization in the jumping escape of locusts exposed to the robot-predator attack. The laterality of this response was higher in L. migratoria adults over young instars. Furthermore, population-level lateralization of predator surveillance was found testing both L. migratoria adults and young instars; locusts used the right compound eye to oversee the robot-predator. Right-biased individuals were more stationary over left-biased ones during surveillance of the robot-predator. Individual-level lateralization could avoid predictability during the jumping escape. Population-level lateralization may improve coordination in the swarm during specific group tasks such as predator surveillance. To the best of our knowledge, this is the first report of lateralized predator-prey interactions in insects. Our findings outline the possibility of using biomimetic robots to study predator-prey interaction, avoiding the use of real predators, thus achieving standardized experimental conditions to investigate complex and flexible behaviours.
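Lateralization results of this kind are commonly summarized with a laterality index; a minimal version is shown below (the authors' exact statistic may differ):

```python
def laterality_index(right, left):
    """Conventional laterality index: (R - L) / (R + L), ranging from
    -1 (fully left-biased) to +1 (fully right-biased)."""
    return (right - left) / (right + left)
```

For example, a locust overseeing the robot predator with the right compound eye in 30 of 40 trials would score (30 - 10) / 40 = 0.5.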
Using Lego robots to estimate cognitive ability in children who have severe physical disabilities.
Cook, Albert M; Adams, Kim; Volden, Joanne; Harbottle, Norma; Harbottle, Cheryl
2011-01-01
To determine whether low-cost robots provide a means by which children with severe disabilities can demonstrate understanding of cognitive concepts. Ten children, ages 4 to 10, diagnosed with cerebral palsy and related motor conditions, participated. Participants had widely variable motor, cognitive and receptive language skills, but all were non-speaking. A Lego Invention 'roverbot' was used to carry out a range of functional tasks, from single-switch replay of pre-stored movements to total control of movement in two dimensions. The level of sophistication achieved on hierarchically arranged play tasks was used to estimate cognitive skills. The 10 children performed at one of six hierarchically arranged levels, from 'no interaction' through 'simple cause and effect' to 'development and execution of a plan'. Teacher interviews revealed that children were interested in the robot, enjoyed interacting with it and demonstrated changes in behaviour and social and language skills following interaction. Children with severe physical disabilities can control a Lego robot to perform unstructured play tasks. In some cases, they were able to display more sophisticated cognitive skills through manipulating the robot than in traditional standardised tests. Success with the robot could be a proxy measure for children who have cognitive abilities but cannot demonstrate them in standard testing.
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, “Fast multi-body…” Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to
ERIC Educational Resources Information Center
Arita, A.; Hiraki, K.; Kanda, T.; Ishiguro, H.
2005-01-01
As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the…
Design of a simulation environment for laboratory management by robot organizations
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Cellier, Francois E.; Rozenblit, Jerzy W.
1988-01-01
This paper describes the basic concepts needed for a simulation environment capable of supporting the design of robot organizations for managing chemical, or similar, laboratories on the planned U.S. Space Station. The environment should facilitate a thorough study of the problems to be encountered in assigning the responsibility of managing a non-life-critical, but mission valuable, process to an organized group of robots. In the first phase of the work, we seek to employ the simulation environment to develop robot cognitive systems and strategies for effective multi-robot management of chemical experiments. Later phases will explore human-robot interaction and development of robot autonomy.
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
Interactive language learning by robots: the transition from babbling to word forms.
Lyon, Caroline; Nehaniv, Chrystopher L; Saunders, Joe
2012-01-01
The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. 
This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
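The "simple, real-time, frequency dependent mechanism" can be caricatured as counting syllable tokens in the perceived stream and promoting the frequent ones to candidate word forms (the stream and threshold below are invented for illustration):

```python
from collections import Counter

def salient_word_forms(syllable_stream, min_count=3):
    """Promote syllables whose token frequency reaches min_count to
    candidate word forms, most frequent first."""
    counts = Counter(syllable_stream)
    return [s for s, c in counts.most_common() if c >= min_count]

# A toy perceived stream; 'ba' and 'da' recur, filler syllables do not.
stream = "ba da ba go ba da mu ba da".split()
```

This mirrors the abstract's observation that consistently pronounced content words gain relative frequency and hence influence on the learner, while variably pronounced function words do not.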
Affordance Templates for Shared Robot Control
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kim
2014-01-01
This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
Wang, Yin
2015-01-01
Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418
Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio
2016-01-01
Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application to stick-slip interaction. An equivalent cantilever-beam model of the tactile sensor was built, capturing the relationship between the sensor output and the friction applied to the sensor. With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed, based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transfer on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility for detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detecting the contact state on a soft-rigid surface in other robot-environment interaction systems. PMID:27023545
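Detecting stick-slip from "the characteristics of friction change" suggests watching for a sharp drop in the friction estimate, since a slip releases built-up shear. A toy detector along those lines (the threshold and signal are illustrative, not the paper's method or data):

```python
def detect_slip(friction, drop=0.3):
    """Return the first sample index where the friction estimate falls by
    more than `drop` between consecutive samples (slip releases shear),
    or None if no such drop occurs."""
    for t in range(1, len(friction)):
        if friction[t - 1] - friction[t] > drop:
            return t
    return None
```

A real controller would act on this flag immediately, e.g. by increasing grip force before the wafer shifts further.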
Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain
2015-01-01
In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km/h on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force), and thus a less active participation during locomotion, should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex, which is known to be crucial for motor learning.
2010-01-01
Background: Manual body weight supported treadmill training and robot-aided treadmill training are frequently used techniques for the gait rehabilitation of individuals after stroke and spinal cord injury. Current evidence suggests that robot-aided gait training may be improved by making robotic behavior more patient-cooperative. In this study, we have investigated the immediate effects of patient-cooperative versus non-cooperative robot-aided gait training on individuals with incomplete spinal cord injury (iSCI). Methods: Eleven patients with iSCI participated in a single training session with the gait rehabilitation robot Lokomat. The patients were exposed to four different training modes in random order: During both non-cooperative position control and compliant impedance control, fixed timing of movements was provided. During two variants of the patient-cooperative path control approach, free timing of movements was enabled and the robot provided only spatial guidance. The two variants of the path control approach differed in the amount of additional support, which was either individually adjusted or exaggerated. Joint angles and torques of the robot as well as muscle activity and heart rate of the patients were recorded. Kinematic variability, interaction torques, heart rate and muscle activity were compared between the different conditions. Results: Patients showed more spatial and temporal kinematic variability, reduced interaction torques, a higher increase of heart rate and more muscle activity in the patient-cooperative path control mode with individually adjusted support than in the non-cooperative position control mode. In the compliant impedance control mode, spatial kinematic variability was increased and interaction torques were reduced, but temporal kinematic variability, heart rate and muscle activity were not significantly higher than in the position control mode.
Conclusions: Patient-cooperative robot-aided gait training with free timing of movements made individuals with iSCI participate more actively and with larger kinematic variability than non-cooperative, position-controlled robot-aided gait training. PMID:20828422
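The spatial and temporal kinematic variability compared across training modes can be quantified in many ways; one simple proxy is the mean point-by-point standard deviation of a joint-angle trajectory across time-normalized gait cycles (the study's exact metric may differ):

```python
import statistics

def kinematic_variability(cycles):
    """Mean point-by-point standard deviation of a joint-angle trajectory
    across repeated, time-normalized gait cycles (degrees in, degrees out)."""
    return statistics.mean(statistics.pstdev(pt) for pt in zip(*cycles))
```

Under this proxy, identical cycles score 0, and the patient-cooperative modes described above would be expected to score higher than rigid position control.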
Cortellessa, Gabriella; Fracasso, Francesca; Sorrentino, Alessandra; Orlandini, Andrea; Bernardi, Giulio; Coraci, Luca; De Benedictis, Riccardo; Cesta, Amedeo
2018-02-01
This article describes an enhanced telepresence robot named ROBIN, part of a telecare system derived from the GIRAFFPLUS project for supporting and monitoring older adults at home. ROBIN is integrated in a sensor-rich environment that aims to continuously monitor physical and psychological wellbeing of older persons living alone. The caregivers (formal/informal) can communicate through it with their assisted persons. Long-term trials in real houses highlighted several user requirements that inspired improvements on the robotic platform. The enhanced telepresence robot was assessed by users to test its suitability to support social interaction and provide motivational feedback on health-related aspects. Twenty-five users (n = 25) assessed the new multimodal interaction capabilities and new communication services. A psychophysiological approach was adopted to investigate aspects like engagement, usability, and affective impact, as well as the possible role of individual differences on the quality of human-robot interaction. ROBIN was overall judged usable, the interaction with/through it resulted pleasant and the required workload was limited, thus supporting the idea of using it as a central component for remote assistance and social participation. Open-minded users tended to have a more positive interaction with it. This work describes an enabling technology for remote assistance and social communication. It highlights the importance of being compliant with users' needs to develop solutions easy to use and able to foster their social connections. The role of personality appeared to be relevant for the interaction, underscoring a clear role of the service personalization.
Group sessions with Paro in a nursing home: Structure, observations and interviews.
Robinson, Hayley; Broadbent, Elizabeth; MacDonald, Bruce
2016-06-01
We recently reported that a companion robot reduced residents' loneliness in a randomised controlled trial at an aged-care facility. This report aims to provide additional, previously unpublished data about how the sessions were run, residents' interactions with the robot and staff perspectives. Observations were conducted focusing on engagement, how residents treated the robot and if the robot acted as a social catalyst. In addition, 16 residents and 21 staff were asked open-ended questions at the end of the study about the sessions and the robot. Observations indicated that some residents engaged on an emotional level with Paro, and Paro was treated as both an agent and an artificial object. Interviews revealed that residents enjoyed sharing, interacting with and talking about Paro. This study supports other research showing Paro has psychosocial benefits and provides a guide for those wishing to use Paro in a group setting in aged care. © 2015 AJA Inc.
A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot
Kim, Min Su; Lee, Jae Geun; Kang, Soon Ju
2016-01-01
In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robots “Pet-Bot”. To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, which means that the robot self-identifies the owner’s approach on its own. Second, Pet-Bot’s activity is synchronized with the owner’s behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but meets a situation where it cannot make a decision, or the owner wants to stop the Pet-Bot synchronization mode to relax). In this case, we have adopted a gesture recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real-time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify their motion recognition of and synchronization with humans in real-time. The results showed a guaranteed level of accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot’s control performance when following or leading a human in real-time. PMID:27338384
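The wearable-side gesture recognition from the 3-D accelerometer can be approximated, at its crudest, by flagging samples whose acceleration magnitude departs from gravity (the threshold is illustrative; the authors' recognizer is certainly richer than this sketch):

```python
import math

def is_shake(samples, g=9.81, thresh=3.0):
    """Flag a 'shake' gesture when any 3-axis accelerometer sample's
    magnitude deviates from gravity by more than thresh (m/s^2)."""
    return any(abs(math.sqrt(x * x + y * y + z * z) - g) > thresh
               for x, y, z in samples)
```

A wearable device could use such a flag, sent over BLE, to tell the Pet-Bot to pause or resume its synchronization mode.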
Body-terrain interaction affects large bump traversal of insects and legged robots.
Gart, Sean W; Li, Chen
2018-02-02
Small animals and robots must often rapidly traverse large bump-like obstacles when moving through complex 3D terrains, during which, in addition to leg-ground contact, their body inevitably comes into physical contact with the obstacles. However, we know little about the performance limits of large bump traversal and how body-terrain interaction affects traversal. To address these, we challenged the discoid cockroach and an open-loop six-legged robot to dynamically run into a large bump of varying height to discover the maximal traversal performance, and studied how locomotor modes and traversal performance are affected by body-terrain interaction. Remarkably, during rapid running, both the animal and the robot were capable of dynamically traversing a bump much higher than its hip height (up to 4 times the hip height for the animal and 3 times for the robot, respectively) at traversal speeds typical of running, with decreasing traversal probability with increasing bump height. A stability analysis using a novel locomotion energy landscape model explained why traversal was more likely when the animal or robot approached the bump with a low initial body yaw and a high initial body pitch, and why deflection was more likely otherwise. Inspired by these principles, we demonstrated a novel control strategy of active body pitching that increased the robot's maximal traversable bump height by 75%. Our study is a major step in establishing the framework of locomotion energy landscapes to understand locomotion in complex 3D terrains.
d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola
2017-04-14
In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the tested conditions; (ii) small (i.e., < 1 cm) relative displacements between the APO cuffs and the corresponding body segments (a property referred to as stability); and (iii) a significant increase in the human-robot kinematic discrepancy at the hip flexion-extension joint as speed and assistance level increased. The APO mechanics and actuation have negligible interference in human locomotion. Human kinematics was not affected by the APO under any of the tested conditions.
In addition, under all tested conditions, there was no relevant relative displacement between the orthotic cuffs and the corresponding anatomical segments. Hence, the physical human-robot coupling is reliable. These facts prove that the adopted mechanical design of passive degrees of freedom allows an effective human-robot kinematic coupling. We believe that this analysis may be useful for the definition of evaluation metrics for the ergonomics assessment of wearable robots.
Using Empathy to Improve Human-Robot Relationships
NASA Astrophysics Data System (ADS)
Pereira, André; Leite, Iolanda; Mascarenhas, Samuel; Martinho, Carlos; Paiva, Ana
For robots to become our personal companions in the future, they need to know how to socially interact with us. One defining characteristic of human social behaviour is empathy. In this paper, we present a robot that acts as a social companion expressing different kinds of empathic behaviours through its facial expressions and utterances. The robot comments on the moves of two subjects playing a chess game against each other, being empathic toward one of them and neutral toward the other. The results of a pilot study suggest that users to whom the robot was empathic perceived it more as a friend.
Brief Report: Development of a Robotic Intervention Platform for Young Children with ASD
ERIC Educational Resources Information Center
Warren, Zachary; Zheng, Zhi; Das, Shuvajit; Young, Eric M.; Swanson, Amy; Weitlauf, Amy; Sarkar, Nilanjan
2015-01-01
Increasingly researchers are attempting to develop robotic technologies for children with autism spectrum disorder (ASD). This pilot study investigated the development and application of a novel robotic system capable of dynamic, adaptive, and autonomous interaction during imitation tasks with embedded real-time performance evaluation and…
Guerrero, Carlos Rodriguez; Fraile Marinero, Juan Carlos; Turiel, Javier Perez; Muñoz, Victor
2013-11-01
Human motor performance, speed and variability are highly susceptible to emotional states. This paper reviews the impact of emotions on motor control performance, and studies the possibility of improving the perceived skill/challenge relation in a multimodal neural rehabilitation scenario by means of a biocybernetic controller that modulates the assistance provided by a haptically controlled robot in reaction to undesirable physical and mental states. Results from psychophysiological, performance and self-assessment data for closed-loop experiments, in contrast with their open-loop counterparts, suggest that the proposed method had a positive impact on the overall challenge/skill relation, leading to an enhanced physical human-robot interaction experience. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Flexible robotic entry device for a nuclear materials production reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckendorn, F.M. II
1988-01-01
The Savannah River Laboratory has developed and is implementing a flexible robotic entry device (FRED) for the nuclear materials production reactors now operating at the Savannah River Plant (SRP). FRED is designed for rapid deployment into confinement areas of operating reactors to assess unknown conditions. A unique smart tether method has been incorporated into FRED for simultaneous bidirectional transmission of multiple video/audio/control/power signals over a single coaxial cable. This system makes it possible to use FRED under all operating and standby conditions, including those where radio/microwave transmissions are not possible or permitted, and increases the quantity of data available.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
The role of automation and artificial intelligence
NASA Astrophysics Data System (ADS)
Schappell, R. T.
1983-07-01
Consideration is given to emerging technologies that are not currently in common use, yet will be mature enough for implementation in a space station. Artificial intelligence (AI) will permit more autonomous operation and improve the man-machine interfaces. Technology goals include the development of expert systems, a natural language query system, automated planning systems, and AI image understanding systems. Intelligent robots and teleoperators will be needed, together with improved sensory systems for robotics, vehicle control, and spacecraft housekeeping. Finally, NASA is developing the ROBSIM computer program to evaluate levels of automation, perform parametric studies and error analyses, optimize trajectories and control systems, and assess AI technology.
ACS (Alma Common Software) operating a set of robotic telescopes
NASA Astrophysics Data System (ADS)
Westhues, C.; Ramolla, M.; Lemke, R.; Haas, M.; Drass, H.; Chini, R.
2014-07-01
We use the ALMA Common Software (ACS) to establish a unified middleware for robotic observations with the 40cm Optical, 80cm Infrared and 1.5m Hexapod telescopes located at OCA (Observatorio Cerro Armazones) and the ESO 1-m located at La Silla. ACS hides the technical specifications, such as mount type or camera model, from the observer. Furthermore, ACS provides a uniform interface to the different telescopes, allowing us to run the same planning program for each telescope. Observations are carried out for long-term monitoring campaigns to study the variability of stars and AGN. We present here the specific implementation for the different telescopes.
Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.
Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W
2012-05-01
Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.
Perception and Perspective in Robotics
2003-01-01
NASA Astrophysics Data System (ADS)
Kozyrev, Iu. G.
Topics covered include terms, definitions, and classification; operator-directed manipulators; autooperators as used in automated pressure casting; the construction and application of industrial robots; and the operating bases of automated systems. Attention is given to adaptive and interactive robots; gripping mechanisms; and applications to foundry production, press-forging plants, heat treatment, welding, and assembly operations. A review of design recommendations includes a determination of fundamental structural and technological indicators for industrial robots and a consideration of drive mechanisms.
Dynamic electronic institutions in agent oriented cloud robotic systems.
Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice
2015-01-01
The dot-com bubble burst in the year 2000, prompting a swift movement toward resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a remoulding of existing technologies to suit a new business model. Cloud robotics is the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the processes of formation, reformation and dissolution of institutions are automated, leading to run-time adaptations in groups of agents. DEIs bring order and group intellect to agent-oriented cloud robotic ecosystems. This article presents DEI implementations through the HTM5 methodology.
Progress in EEG-Based Brain Robot Interaction Systems
Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe
2017-01-01
The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled people in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since control of robot systems via brainwaves must consider real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488
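The decoding pipeline this review names (preprocessing, feature extraction, feature classification) can be sketched in toy form. The band choices, log-band-power features, nearest-centroid classifier, and synthetic two-channel signals below are illustrative assumptions, not methods taken from the article.

```python
import numpy as np

FS = 250  # sampling rate in Hz, illustrative

def bandpass(signal, lo, hi, fs=FS):
    """Crude FFT-based band-pass: zero spectral bins outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

def band_power(signal, lo, hi, fs=FS):
    """Log-power of the signal within one frequency band."""
    return np.log(np.mean(bandpass(signal, lo, hi, fs) ** 2) + 1e-12)

def features(trial):
    """Per-channel mu (8-12 Hz) and beta (18-26 Hz) log band power."""
    return np.array([band_power(ch, lo, hi)
                     for ch in trial for lo, hi in [(8, 12), (18, 26)]])

# Illustrative synthetic trials: class A has strong 10 Hz activity on channel 0.
rng = np.random.default_rng(1)
t = np.arange(500) / FS
def trial(mu_amp):
    return np.stack([mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.3, 500),
                     rng.normal(0, 0.3, 500)])

train_a = np.mean([features(trial(2.0)) for _ in range(10)], axis=0)
train_b = np.mean([features(trial(0.0)) for _ in range(10)], axis=0)
probe = features(trial(2.0))
label = "A" if np.linalg.norm(probe - train_a) < np.linalg.norm(probe - train_b) else "B"
print(label)
```

Real BCI decoders typically add artifact rejection, spatial filtering (e.g., CSP), and calibrated classifiers, but they keep this same three-stage structure.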
A haptic sensing upgrade for the current EOD robotic fleet
NASA Astrophysics Data System (ADS)
Rowe, Patrick
2014-06-01
The past decade and a half has seen a tremendous rise in the use of mobile manipulator robotic platforms for bomb inspection and disposal, explosive ordnance disposal, and other extremely hazardous tasks in both military and civilian settings. Skilled operators are able to control these robotic vehicles in amazing ways given the very limited situational awareness obtained from a few on-board camera views. Future generations of robotic platforms will, no doubt, provide some sort of additional force or haptic sensor feedback to further enhance the operator's interaction with the robot, especially when dealing with fragile, unstable, and explosive objects. Unfortunately, robot operators need this capability today. This paper discusses an approach to provide existing (and future) robotic mobile manipulator platforms, with which trained operators are already familiar and highly proficient, with this desired haptic and force feedback capability. The goals of this technology are to be rugged, reliable, and affordable. It should also be applicable to a wide range of existing robots with a wide variety of manipulator/gripper sizes and styles. Finally, the presentation of the haptic information to the operator is discussed, given that control devices that physically interact with operators are not widely available and are still in the research stage.
Pragmatic Frames for Teaching and Learning in Human–Robot Interaction: Review and Challenges
Vollmer, Anna-Lisa; Wrede, Britta; Rohlfing, Katharina J.; Oudeyer, Pierre-Yves
2016-01-01
One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots but are used to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues to enable learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used basic elements of the pragmatic frames machinery in practice, though not always explicitly. However, we also show that pragmatic frames have so far been used in a very restricted way compared to how they are used in human-human interaction, and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; and (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on future research directions, describing the key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching.
PMID:27752242
Use of pharmacy delivery robots in intensive care units.
Summerfield, Marc R; Seagull, F Jacob; Vaidya, Neelesh; Xiao, Yan
2011-01-01
The use of pharmacy delivery robots in an institution's intensive care units was evaluated. In 2003, the University of Maryland Medical Center (UMMC) began a pilot program to determine the logistic capability and functional utility of robotic technology in the delivery of medications from satellite pharmacies to patient care units. Three satellite pharmacies currently used the robotic system. Five data sources (electronic robot activation records, logs, interviews, surveys, and observations) were used to assess five key aspects of robotic delivery: robot use, reliability, timeliness, cost minimization, and acceptance. A 19-item survey using a 7-point Likert-type scale was developed to determine if pharmacy delivery robots changed nurses' perception of pharmacy service. The components measured included general satisfaction, reliability, timeliness, stat orders, services, interaction with pharmacy, and status tracking. A total of 23 pre-implementation, 96 post-implementation, and 30 two-year follow-up surveys were completed. After implementation of the robotic delivery system, time from fax to label, order preparation time, and idle time for medications to be delivered decreased, while nurses' general satisfaction with the pharmacy and opinion of the reliability of pharmacy delivery significantly increased. Robotic delivery did not influence the perceived quality of delivery service or the timeliness of orders or stat orders. Robot reliability was a major issue for the technician but not for pharmacists, who did not have as much interaction with the devices. By considering the needs of UMMC and its patients and matching them with available technology, the institution was able to improve the medication-use process and timeliness of medication departure from the pharmacy.
Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong
2017-01-01
Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of its inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation could be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, but which could be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567
Gácsi, Márta; Szakadát, Sára; Miklósi, Adám
2013-01-01
These studies are part of a project aiming to reveal relevant aspects of human-dog interactions, which could serve as a model to design successful human-robot interactions. Presently there are no successfully commercialized assistance robots, however, assistance dogs work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog-owner dyads performing a carrying task. We revealed typical behavior sequences and also differences depending on the dyads' experiences and on whether the owner was a wheelchair user. In Study 2, we investigated dogs' responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to execute the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners' disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate/high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs' cooperative behaviors and problem solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.
Influence of facial feedback during a cooperative human-robot task in schizophrenia.
Cohen, Laura; Khoramshahi, Mahdi; Salesse, Robin N; Bortolon, Catherine; Słowiński, Piotr; Zhai, Chao; Tsaneva-Atanasova, Krasimira; Di Bernardo, Mario; Capdevielle, Delphine; Marin, Ludovic; Schmidt, Richard C; Bardy, Benoit G; Billard, Aude; Raffard, Stéphane
2017-11-03
Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains as yet unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with patients' ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties exploiting facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.
Lee, Kit-Hang; Fu, Denny K C; Leong, Martin C W; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong; Kwok, Ka-Wai
2017-12-01
Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of its inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation could be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions on the soft manipulator, but which could be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric and online, as well as local, training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. Detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. Advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration in the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments.
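A minimal sketch of the nonparametric, online, local inverse-model idea described in the abstract above, under stated assumptions: the toy forward model, the random-exploration seeding (standing in for the authors' FEA-based initialization), and the distance-weighted k-nearest-neighbour estimator are all illustrative choices, not the paper's method.

```python
import numpy as np

def forward(u):
    """Toy soft-robot forward model (actuation -> tip xy), unknown to the learner."""
    return np.array([np.sin(u[0]) + 0.3 * u[1], np.cos(u[0]) * u[1]])

class LocalInverseModel:
    """Nonparametric inverse model: store (tip, actuation) pairs online and
    answer queries with a distance-weighted average of the k nearest tips."""
    def __init__(self, k=5):
        self.k, self.tips, self.acts = k, [], []

    def update(self, u, tip):
        self.acts.append(np.asarray(u))
        self.tips.append(np.asarray(tip))

    def query(self, target):
        d = np.linalg.norm(np.array(self.tips) - target, axis=1)
        idx = np.argsort(d)[:self.k]
        w = 1.0 / (d[idx] + 1e-9)  # closer samples dominate the local estimate
        return np.average(np.array(self.acts)[idx], axis=0, weights=w)

# Seed the model with random actuations, then query a desired tip position.
rng = np.random.default_rng(2)
model = LocalInverseModel()
for _ in range(500):
    u = rng.uniform(-1, 1, 2)
    model.update(u, forward(u))

target = np.array([0.5, 0.4])
u_hat = model.query(target)                 # estimated actuation for the target
err = np.linalg.norm(forward(u_hat) - target)
print(round(err, 3))
```

Because every executed command and observed tip pose can be fed back through `update`, the local model keeps adapting while the robot runs, which is the property that makes this family of methods attractive under unmodeled external forces.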
Learning compliant manipulation through kinesthetic and tactile human-robot interaction.
Kronander, Klas; Billard, Aude
2014-01-01
Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots, which is a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect simulated data into the real environments, e.g., to visualize the positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that virtual objects are touchable in this system owing to projectors. We also show the portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls in order to perform only quadruped locomotion in real environments, which is quite difficult to simulate accurately. Our robots autonomously learn and acquire more beneficial strategies without human intervention in our augmented environment than those in a fully simulated environment.
Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan
2013-06-07
Millions of children in North America receive an annual flu vaccination, many of whom are at risk of experiencing severe distress. Millions of children also use technologically advanced devices such as computers and cell phones. Based on this familiarity, we introduced another sophisticated device - a humanoid robot - to interact with children during their vaccination. We hypothesized that these children would experience less pain and distress than children who did not have this interaction. This was a randomized controlled study in which 57 children (30 male; age, mean±SD: 6.87±1.34 years) were randomly assigned to a vaccination session with a nurse who used standard administration procedures, or with a robot who was programmed to use cognitive-behavioral strategies with them while a nurse administered the vaccination. Measures of pain and distress were completed by children, parents, nurses, and researchers. Multivariate analyses of variance indicated that interaction with a robot during flu vaccination resulted in significantly less pain and distress in children according to parent, child, nurse, and researcher ratings with effect sizes in the moderate to high range (Cohen's d=0.49-0.90). This is the first study to examine the effectiveness of child-robot interaction for reducing children's pain and distress during a medical procedure. All measures of reduction were significant. These findings suggest that further research on robotics at the bedside is warranted to determine how they can effectively help children manage painful medical procedures. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
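The effect sizes reported above are Cohen's d values. As a reminder of the statistic (not the study's actual computation), a minimal sketch for two independent groups; the numbers in the example are illustrative, not data from this study:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with a pooled standard deviation for two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative values only: a one-point reduction in a distress rating, with
# SD 2 in both groups of 20 children, gives d = 0.5, squarely in the
# moderate-to-high range the study reports (0.49-0.90).
d = cohens_d(7.0, 2.0, 20, 6.0, 2.0, 20)
```

Values of d around 0.5 are conventionally read as moderate effects and values around 0.8 and above as large ones.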
An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.
Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S
2011-01-01
Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by social functioning characterized by impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships, and lack of social and emotional exchange. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of a robot as an improved diagnostic and early-intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot can track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and 1-DOF eyelids, raise its 1-DOF wings, play sound, and record sound. These attributes will give it the ability to be used for the diagnosis and treatment of autism. As part of this project, the robot and its electronics and control software have been developed; integrating semi-autonomous interaction, adding teleoperation by a remote healthcare provider, and initiating trials with children in a local clinic are in progress.
NASA Technical Reports Server (NTRS)
Rice, J. W., Jr.; Smith, P. H.; Marshall, J. R.
1999-01-01
The first microscopic sedimentological studies of the Martian surface will commence with the landing of the Mars Polar Lander (MPL) on December 3, 1999. The Robotic Arm Camera (RAC) has a resolution of 25 µm/pixel, which will permit detailed micromorphological analysis of surface and subsurface materials. The Robotic Arm will be able to dig up to 50 cm below the surface. The walls of the trench will also be inspected by the RAC to look for evidence of stratigraphic and/or sedimentological relationships. The 2001 Mars Lander will build upon and expand the sedimentological research begun by the RAC on MPL. This will be accomplished by: (1) macroscopic (dm to cm): Descent Imager, Pancam, RAC; (2) microscopic (mm to µm): RAC, MECA Optical Microscope (Figure 2), AFM. This paper will focus on investigations that can be conducted by the RAC and the MECA Optical Microscope.
Six degree-of-freedom scanning supports and manipulators based on parallel robots
NASA Astrophysics Data System (ADS)
Comin, Fabio
1995-02-01
The exploitation of third-generation SR sources relies heavily on accurate and stable positioning and scanning of samples and optical elements. In some cases, active feedback is also necessary. Normally, these tasks are carried out by serial addition of individual components, each of them providing a well-defined excursion path. By contrast, exploiting the concept of parallel robots, structures in a closed kinematic chain, permits us to follow any given trajectory in six-dimensional space with a large increase in accuracy and stiffness. At the ESRF, the parallel-robot architecture conceived decades ago for flight simulators has been adapted both to actively align and operate optical elements of considerable weight and to position small samples in ultrahigh vacuum. The performance of these devices is far superior to the initial specification, and a variety of drive mechanisms are being developed to fit the different needs of the ESRF beamlines.
Conference on Intelligent Robotics in Field, Factory, Service and Space (CIRFFSS 1994), Volume 2
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1994-01-01
The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which is also required for human space exploration missions. Individual sessions addressed the following topics: (1) vision systems integration and architecture; (2) selective perception and human-robot interaction; (3) robotic systems technology; (4) military and other field applications; (5) dual-use precommercial robotic technology; (6) building operations; (7) planetary exploration applications; (8) planning; (9) new directions in robotics; and (10) commercialization.
Human-robot interaction tests on a novel robot for gait assistance.
Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio
2013-06-01
This paper presents tests of a treadmill-based, non-anthropomorphic wearable robot that assists hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on evaluating intrinsic backdrivability and assistance capability. Tests on a young healthy subject were conducted. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, owing to the robot's design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject toward a predicted kinematic state.
A Multimodal Emotion Detection System during Human-Robot Interaction
Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
2013-01-01
In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice and facial-expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve a greater degree of satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
Baisch, Stefanie; Kolling, Thorsten; Rühl, Saskia; Klein, Barbara; Pantel, Johannes; Oswald, Frank; Knopf, Monika
2018-01-01
It has been questioned, by researchers in robotics as well as by the general public, to what extent companion-type robots can support the elderly in the fulfillment of their psychological and social needs. Although these robots have already been used in care settings in Germany, research has made little reference to this practical experience in analyzing their impact and benefit. To start to close this gap, the current article reports on the current use of companion-type robots in care settings, on the effects reported by professional caregivers, and on the role of psychosocial needs in the acceptance and use of companion-type robots by the elderly. In the first study, 30 professional caregivers with experience in the use of the robot seal Paro in care settings were interviewed regarding Paro's application and the observed effects on their clients. In the second study, three case examples are presented from an interaction study in which vulnerable elderly persons had the robot dinosaur Pleo at their disposal for a maximum period of 15 days. Paro is used very flexibly in a variety of settings and with a broad range of user groups (study 1). The reported psychosocial effects were mainly positive but short term. The case examples (study 2) show that psychosocial needs can either foster or hinder robot acceptance and use. They also emphasize the important role of caregivers in the interaction between the elderly and emotional robots in the context of eldercare. The beneficial and ethical use of companion-type robots in care settings demands a high commitment on the part of the caregivers. Given this prerequisite, emotional robots can be a valuable therapeutic tool.
Robotic Seals as Therapeutic Tools in an Aged Care Facility: A Qualitative Study
Bodak, Marie; Barlas, Joanna; Harwood, June; Pether, Mary
2016-01-01
Robots, including robotic seals, have been used as an alternative to therapies such as animal assisted therapy in the promotion of health and social wellbeing of older people in aged care facilities. There is limited research available that evaluates the effectiveness of robot therapies in these settings. The aim of this study was to identify, explore, and describe the impact of the use of Paro robotic seals in an aged care facility in a regional Australian city. A qualitative, descriptive, exploratory design was employed. Data were gathered through interviews with the three recreational therapists employed at the facility who were also asked to maintain logs of their interactions with the Paro and residents. Data were transcribed and thematically analysed. Three major themes were identified from the analyses of these data: “a therapeutic tool that's not for everybody,” “every interaction is powerful,” and “keeping the momentum.” Findings support the use of Paro as a therapeutic tool, revealing improvement in emotional state, reduction of challenging behaviours, and improvement in social interactions of residents. The potential benefits justify the investment in Paro, with clear evidence that these tools can have a positive impact that warrants further exploration. PMID:27990301
Physical Student-Robot Interaction with the ETHZ Haptic Paddle
ERIC Educational Resources Information Center
Gassert, R.; Metzger, J.; Leuenberger, K.; Popp, W. L.; Tucker, M. R.; Vigaru, B.; Zimmermann, R.; Lambercy, O.
2013-01-01
Haptic paddles--low-cost one-degree-of-freedom force feedback devices--have been used with great success at several universities throughout the US to teach the basic concepts of dynamic systems and physical human-robot interaction (pHRI) to students. The ETHZ haptic paddle was developed for a new pHRI course offered in the undergraduate…
Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things
2003-06-01
visions behind the Cog Project were to build a "robot baby", which could interact with people and objects, imitate the motions of its teachers, and even...though. A very elaborate animatronic motor controller can produce very life-like canned motion, although the controller itself bears little resemblance
Design of a Behavior of Robot That Attracts the Interest of the Mildly Demented Elderly.
Nihei, Misato; Sakuma, Natsuki; Yabe, Hiroyuki; Kamata, Minoru; Inoue, Takenobu
2017-01-01
In this study, we propose an interaction design for a robot system that enables sustained nonverbal communication with mildly demented elderly people, using an unexpected intervention that overturns both the amount of interaction in the field and the users' mental model of the robot, and we demonstrate its effectiveness in a group home for the mildly demented elderly.
Types of verbal interaction with instructable robots
NASA Technical Reports Server (NTRS)
Crangle, C.; Suppes, P.; Michalowski, S.
1987-01-01
An instructable robot is one that accepts instruction in a natural language such as English and uses that instruction to extend its basic repertoire of actions. Such robots are quite different in conception from autonomously intelligent robots, which provide the impetus for much of the research on inference and planning in artificial intelligence. Examined here are the significant problem areas in the design of robots that learn from verbal instruction. Examples are drawn primarily from our earlier work on instructable robots and recent work on the Robotic Aid for the physically disabled. Natural-language understanding by machines is discussed, as well as the possibilities and limits of verbal instruction. The core problem of verbal instruction, namely, how to achieve specific concrete action in the robot in response to commands that express general intentions, is considered, as are two major challenges to instructability: achieving appropriate real-time behavior in the robot, and extending the robot's language capabilities.
Lounging with robots--social spaces of residents in care: A comparison trial.
Peri, Kathryn; Kerse, Ngaire; Broadbent, Elizabeth; Jayawardena, Chandimal; Kuo, Tony; Datta, Chandan; Stafford, Rebecca; MacDonald, Bruce
2016-03-01
To investigate whether robots could reduce resident sleeping and stimulate activity in the lounges of an older persons' care facility. Non-randomised controlled trial over a 12-week period. The intervention involved situating robots in low-level and high-dependency ward lounges and a comparison with similar lounges without robots. A time sampling observation method was utilised to observe resident behaviour, including sleep and activities over periods of time, to compare interactions in robot and no robot lounges. The use of robots was modest; overall 13% of residents in robot lounges used the robot. Utilisation was higher in the low-level care lounges; on average, 23% used the robot, whereas in high-level care lounges, the television being on was the strongest predictor of sleep. This study found that having robots in lounges was mostly a positive experience. The amount of time residents slept during the day was significantly less in low-level care lounges that had a robot. © 2015 AJA Inc.
The Role of Reciprocity in Verbally Persuasive Robots.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2016-08-01
The current research examines the persuasive effects of reciprocity in the context of human-robot interaction. This is an important theoretical and practical extension of persuasive robotics by testing (1) if robots can utilize verbal requests and (2) if robots can utilize persuasive mechanisms (e.g., reciprocity) to gain human compliance. Participants played a trivia game with a robot teammate. The ostensibly autonomous robot helped (or failed to help) the participants by providing the correct (vs. incorrect) trivia answers. Then, the robot directly asked participants to complete a 15-minute task for pattern recognition. Compared to no help, results showed that a robot's prior helping behavior significantly increased the likelihood of compliance (60 percent vs. 33 percent). Interestingly, participants' evaluations toward the robot (i.e., competence, warmth, and trustworthiness) did not predict compliance. These results also provided an insightful comparison showing that participants complied at similar rates with the robot and with computer agents. This result documents a clear empirically powerful potential for the role of verbal messages in persuasive robotics.
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on electrooculography (EOG). It can help disabled people improve their ability to live independently. The system acquires the EOG signal from the user, detects the number of blinks and the direction of glances, and then sends commands to the wheelchair robot via RS-232 to control it. The system combines EOG signal processing with human-computer interaction technology, achieving the goal of controlling a wheelchair robot through conscious eye movement.
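The abstract does not specify the detection algorithm or command protocol, so the following is only a hedged sketch of how blink counting and event-to-command mapping in such a system might look; the threshold logic, command bytes, and event names are all assumptions, not details from the paper:

```python
def detect_blinks(samples, threshold):
    """Count blinks as rising-edge threshold crossings of a rectified EOG trace.
    A real system would first band-pass filter and rectify the raw signal;
    this sketch assumes that preprocessing has already been done."""
    count, above = 0, False
    for s in samples:
        if s > threshold and not above:
            count, above = count + 1, True
        elif s <= threshold:
            above = False
    return count

# Hypothetical command protocol (the paper's actual byte codes are not given):
COMMANDS = {
    ("gaze", "left"): b"L",   # glance left  -> turn left
    ("gaze", "right"): b"R",  # glance right -> turn right
    ("blinks", 2): b"S",      # double blink -> start/stop toggle
}

def command_for(event):
    """Map a detected EOG event to a wheelchair command, or None if unmapped."""
    return COMMANDS.get(event)
```

The resulting byte would then be written to the wheelchair controller over RS-232, e.g. with pySerial's `serial.Serial(port, baudrate).write(...)`.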
Sample Return Robot Centennial Challenge
2012-06-16
NASA Program Manager for Centennial Challenges Sam Ortega helps show a young visitor how to drive a rover as part of the interactive NASA Mars rover exhibit during the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event that was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge on Saturday, June 16, 2012 in Worcester, Mass. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Pu, Lihui; Moyle, Wendy; Jones, Cindy; Todorovic, Michael
2018-06-12
Social robots may promote the health of older adults by increasing their perceived emotional support and social interaction. This review aims to summarize the effectiveness of social robots on outcomes (psychological, physiological, quality of life, or medications) of older adults from randomized controlled trials (RCTs). A mixed-method systematic review of RCTs meeting the study inclusion criteria was undertaken. Eight databases were electronically searched up to September 2017. Participants' characteristics, intervention features, and outcome data were retrieved. The mean difference and standardized mean difference with 95% confidence intervals (CI) were synthesized to pool the effect size. A total of 13 articles from 11 RCTs were identified from 2,204 articles, of which 9 studies were included in the meta-analysis. Risk of bias was relatively high in allocation concealment and blinding. Social robots appeared to have positive impacts on agitation, anxiety, and quality of life for older adults but no statistical significance was found in the meta-analysis. However, results from a narrative review indicated that social robot interactions could improve engagement, interaction, and stress indicators, as well as reduce loneliness and the use of medications for older adults. Social robots appear to have the potential to improve the well-being of older adults, but conclusions are limited due to the lack of high-quality studies. More RCTs are recommended with larger sample sizes and rigorous study designs.
Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.
Yoo, Bum-Soo; Kim, Jong-Hwan
2015-09-01
During the last few decades, as a part of effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real-time and the real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
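The fuzzy integral used for aggregation in work like this is typically the Choquet integral. As a minimal sketch of the idea, here is how partial evaluations of one candidate gaze direction could be aggregated with respect to a fuzzy measure; the criterion names and measure values below are invented for illustration and are not the paper's seven criteria:

```python
def choquet_integral(scores, mu):
    """Choquet (fuzzy) integral of partial evaluations `scores` (criterion ->
    value) with respect to a fuzzy measure `mu` (frozenset of criteria ->
    weight, with mu({}) = 0 and mu(all criteria) = 1)."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    total, coalition = 0.0, frozenset()
    for i, (criterion, value) in enumerate(ordered):
        coalition = coalition | {criterion}
        next_value = ordered[i + 1][1] if i + 1 < len(ordered) else 0.0
        # Weight the drop between consecutive sorted scores by the measure
        # of the coalition of criteria scoring at least `value`.
        total += (value - next_value) * mu[coalition]
    return total

# Invented two-criterion example; mu({visual, auditory}) < mu({visual}) +
# mu({auditory}) models redundancy between the two channels.
mu = {frozenset({"visual"}): 0.6,
      frozenset({"auditory"}): 0.5,
      frozenset({"visual", "auditory"}): 1.0}
score = choquet_integral({"visual": 0.8, "auditory": 0.5}, mu)
```

In the paper's setting, one such global evaluation would be computed per candidate gaze direction, adjusted by inhibition of return, and the best-scoring direction selected.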
Learning models of Human-Robot Interaction from small data
Zehfroosh, Ashkan; Kokkoni, Elena; Tanner, Herbert G.; Heinz, Jeffrey
2018-01-01
This paper offers a new approach to learning discrete models for human-robot interaction (HRI) from small data. In the motivating application, HRI is an integral part of a pediatric rehabilitation paradigm that involves a play-based, social environment aiming at improving mobility for infants with mobility impairments. Designing interfaces in this setting is challenging, because in order to harness, and eventually automate, the social interaction between children and robots, a behavioral model capturing the causality between robot actions and child reactions is needed. The paper adopts a Markov decision process (MDP) as such a model, and selects the transition probabilities through an empirical approximation procedure called smoothing. Smoothing has been successfully applied in natural language processing (NLP) and identification where, similarly to the current paradigm, learning from small data sets is crucial. The goal of this paper is two-fold: (i) to describe our application of HRI, and (ii) to provide evidence that supports the application of smoothing for small data sets. PMID:29492408
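Smoothing, as borrowed from NLP, replaces zero-count transition estimates with small nonzero probabilities so that an MDP fit to a small data set does not rule out unobserved child reactions. The paper does not state which smoothing scheme it adopts; the sketch below shows simple additive (Laplace) smoothing of transition probabilities, with invented state and action names:

```python
from collections import Counter

def smoothed_transitions(counts, states, alpha=1.0):
    """Estimate P(s' | s, a) with additive (Laplace) smoothing so that
    transitions never observed in the small data set still get nonzero
    probability. `counts` maps (state, action) to a Counter of next states."""
    probs = {}
    for (s, a), nxt in counts.items():
        total = sum(nxt.values()) + alpha * len(states)
        probs[(s, a)] = {s2: (nxt.get(s2, 0) + alpha) / total for s2 in states}
    return probs

# Toy HRI-flavored example with invented names: after the robot waves from
# "idle", the child was observed "engaged" 3 times and "idle" once, and
# never "crying".
counts = {("idle", "wave"): Counter({"engaged": 3, "idle": 1})}
p = smoothed_transitions(counts, states=["idle", "engaged", "crying"])
# The unseen "crying" reaction gets probability 1/7 instead of 0.
```

More refined schemes from the NLP literature (e.g. backoff or Kneser-Ney style estimators) follow the same principle of redistributing probability mass toward unseen events.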
Can Children Have a Relationship with a Robot?
NASA Astrophysics Data System (ADS)
Beran, Tanya N.; Ramirez-Serrano, Alejandro
As the development of autonomous robots moves towards creating social robots, children's interactions with robots will soon need to be investigated. This paper examines how children think about a robot and attribute features of friendship to it. A total of 184 children between the ages of 5 and 16 years visiting a science centre were randomly selected to participate in an experiment, with approximately even numbers of boys and girls. Children were interviewed after observing a traditional small 5-degree-of-freedom robot arm perform a block-stacking task. A set of experiments was conducted to measure children's perceptions of affiliation with the robot. Content analysis revealed that a large majority would consider a relationship with the robot and would participate in friendship-type behaviors with it. Significant sex differences in how children ascribe characteristics of friendship to a robot were also found.
From Sci-Fi to Reality--Mobile Robots Get the Job Done
ERIC Educational Resources Information Center
Roman, Harry T.
2006-01-01
Robots are simply computers that can interact with their environment. Some are fixed in place in industrial assembly plants for cars, appliances, micro electronic circuitry, and pharmaceuticals. Another important category of robots is the mobiles, machines that can be driven to the workplace, often designed for hazardous duty operation or…
Self-evaluation on Motion Adaptation for Service Robots
NASA Astrophysics Data System (ADS)
Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru
We propose a self-evaluation method that allows service robots to adapt their motions to environmental changes. Motions such as walking, dancing, and demonstrations are described as time-series patterns. These motions are optimized for the architecture of the robot and for a certain surrounding environment; in an unknown operating environment, robots cannot accomplish their tasks. We propose an autonomous motion-generation technique based on heuristic search over histories of internal sensor values. New motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has some prepared motions that accomplish its tasks in the designed environment, and the internal sensor values observed in the designed environment with these prepared motions capture the results of interaction with that environment. The self-evaluation is computed from the difference between the internal sensor values in the designed environment and those in the unknown operating environment. The proposed method modifies the motions so that the interaction results match in both environments. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, global position of the robot, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.
Scano, A; Chiavenna, A; Caimmi, M; Malosio, M; Tosatti, L M; Molteni, F
2017-07-01
Robot-assisted training is a widely used technique to promote motor re-learning in post-stroke patients who suffer from motor impairment. While it is commonly accepted that robot-based therapies are potentially helpful, strong evidence about their efficacy is still lacking. The motor re-learning process may act on muscular synergies, which are groups of co-activating muscles that, being controlled as a synergic group, simplify the problem of motor control: by coordinating a reduced number of neural signals, complex motor patterns can be elicited. This paper aims at analyzing the effects of robot assistance during 3D reaching movements in the framework of muscular synergies. Five healthy people and three neurological patients performed free and robot-assisted reaching movements at two different speeds (slow and quasi-physiological). EMG recordings were used to extract muscular synergies. Results indicate that interaction with the robot only slightly alters healthy people's patterns but, on the contrary, may promote the emergence of physiological-like synergies in neurological patients.
Rouaix, Natacha; Retru-Chavastel, Laure; Rigaud, Anne-Sophie; Monnet, Clotilde; Lenoir, Hermine; Pino, Maribel
2017-01-01
The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adhesion mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia (PwD) in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for PwD. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a “control software” for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention nine PwD (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient- therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: participant's engagement, emotional state and well-being; satisfaction of the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction (HRI). Results showed a high constructive engagement in both CT and RAT sessions. More positive emotional responses in participants were observed in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of PwD in the therapeutic intervention increasing their immediate wellbeing and satisfaction. PMID:28713296
Understanding of Android-Based Robotic and Game Structure
NASA Astrophysics Data System (ADS)
Phongtraychack, A.; Syryamkin, V.
2018-05-01
The development of an android with an impressive lifelike appearance and behavior has been a long-standing goal in robotics, and smartphone-based robotics offers a new and exciting approach for research and education. Recent years have seen progress in many of the technologies required to create such androids; examples include the autonomous Erica android system, capable of conversational interaction, and speech synthesis technologies. The behavior of an Android-based robot can run on the phone itself while the robot performs a task outdoors. In this paper, we present an overview and understanding of the platform of Android-based robotic and game structure for research and education.
Humanoid robotics in health care: An exploration of children's and parents' emotional reactions.
Beran, Tanya N; Ramirez-Serrano, Alex; Vanderkooi, Otto G; Kuhn, Susan
2015-07-01
A new non-pharmacological method of distraction was tested with 57 children during their annual flu vaccination. Given children's growing enthusiasm for technological devices, a humanoid robot was programmed to interact with them while a nurse administered the vaccination. Children smiled more often with the robot, as compared to the control condition, but they did not cry less. Parents indicated that their children held stronger memories for the robot than for the needle, wanted the robot in the future, and felt empowered to cope. We conclude that children and their parents respond positively to a humanoid robot at the bedside.
Robots for Elderly Care: Their Level of Social Interactions and the Targeted End User.
Bedaf, Sandra; de Witte, Luc
2017-01-01
Robots for older adults have a lot of potential. In order to create an overview of the developments in this area, a systematic review of robots for older adults living independently was conducted. Robots were categorized based on their market readiness, the type of support provided (i.e., physical, non-physical, non-specified), and the activity domain they claim to support. Additionally, the commercially available robots are placed in a proposed framework to help distinguish the different types of robots and their focus. During the presentation, an updated version of this state-of-the-art overview will be presented.
NASA Astrophysics Data System (ADS)
Bartolozzi, Chiara; Natale, Lorenzo; Nori, Francesco; Metta, Giorgio
2016-09-01
Tactile sensors provide robots with the ability to interact with humans and the environment with great accuracy, yet technical challenges remain for electronic-skin systems to reach human-level performance.
How to make an autonomous robot as a partner with humans: design approach versus emergent approach.
Fujita, M
2007-01-15
In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.
Some foundational aspects of quantum computers and quantum robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.; Physics
1998-01-01
This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.
On the stiffness analysis of a cable driven leg exoskeleton.
Sanjeevi, N S S; Vashista, Vineet
2017-07-01
Robotic systems are being used for gait rehabilitation of patients with neurological disorders. These devices are externally powered to apply forces on human limbs and assist leg motion. Patients walking with these devices adapt their walking pattern in response to the applied forces, so the efficacy of a rehabilitation paradigm depends on the human-robot interaction. A cable-driven leg exoskeleton (CDLE) uses actuated cables to apply external joint torques to the human leg. Cables are lightweight and flexible but can only pull, so a CDLE requires redundant cables. This redundancy can be utilized to appropriately tune the robot's performance. In this work, we present the stiffness analysis of a CDLE. Different stiffness performance indices are established to study the role of system parameters in improving the human-robot interaction.
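The cable-redundancy idea in the abstract above can be illustrated with a minimal one-joint sketch (the moment arms and torque values are assumed for illustration; this is not the paper's model): because cables can only pull, a single joint needs two antagonistic cables, so the net torque tau = r1*t1 - r2*t2 can be realized by many nonnegative tension pairs, and that leftover freedom (co-contraction) is what a stiffness analysis can exploit.

```python
# Illustrative 1-DOF cable-driven joint: two antagonistic cables with
# moment arms r1, r2 produce net torque tau = r1*t1 - r2*t2.
r1 = r2 = 0.03  # moment arms [m], assumed values

def tensions_for_torque(tau, t2):
    """Choose the antagonist tension t2 freely, then solve t1 so that
    the pair produces the desired net torque tau."""
    t1 = (tau + r2 * t2) / r1
    return t1, t2

# Many tension pairs realize the same 1.5 N*m torque; higher co-contraction
# (larger t2) is the redundancy a CDLE can use to tune its stiffness.
for t2 in (0.0, 50.0, 100.0):
    t1, _ = tensions_for_torque(1.5, t2)
    print(f"t1={t1:.1f} N, t2={t2:.1f} N -> tau={r1 * t1 - r2 * t2:.2f} N*m")
```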
A meta-analysis of factors affecting trust in human-robot interaction.
Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja
2011-10-01
We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive; our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to the robot dimensions of performance and attribute-based factors. Robot performance and attributes were the largest contributors to the development of trust in HRI: factors related to the robot itself, specifically its performance, had the greatest association with trust, environmental factors played only a moderate role, and there was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
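The review above reports its pooled results on two different effect-size scales: a correlation r for the correlational studies and Cohen's d for the experimental ones. For readers comparing the two, the standard r-to-d conversion (which assumes roughly equal group sizes) can be sketched as follows; note that the two pooled values come from separate study sets, so they need not map onto each other.

```python
import math

def r_to_d(r):
    """Convert a correlational effect size r to Cohen's d,
    using the standard equal-group-size conversion d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Inverse conversion: r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

# The meta-analysis reports r = +0.26 (correlational) and d = +0.71 (experimental).
print(round(r_to_d(0.26), 2))  # r = 0.26 corresponds to d ≈ 0.54
print(round(d_to_r(0.71), 2))  # d = 0.71 corresponds to r ≈ 0.33
```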
Adapting GOMS to Model Human-Robot Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, Jill; Scholtz, Jean; Kieras, David
2007-03-09
Human-robot interaction (HRI) has been maturing in tandem with robots’ commercial success. In the last few years HRI researchers have been adopting—and sometimes adapting—human-computer interaction (HCI) evaluation techniques to assess the efficiency and intuitiveness of HRI designs. For example, Adams (2005) used Goal Directed Task Analysis to determine the interaction needs of officers from the Nashville Metro Police Bomb Squad. Scholtz et al. (2004) used Endsley’s (1988) Situation Awareness Global Assessment Technique to determine robotic vehicle supervisors’ awareness of when vehicles were in trouble and thus required closer monitoring or intervention. Yanco and Drury (2004) employed usability testing to determine (among other things) how well a search-and-rescue interface supported use by first responders. One set of HCI tools that has so far seen little exploration in the HRI domain, however, is the class of modeling and evaluation techniques known as formal methods.
Child-Robot Interactions for Second Language Tutoring to Preschool Children
Vogt, Paul; de Haas, Mirjam; de Jong, Chiara; Baxter, Peta; Krahmer, Emiel
2017-01-01
In this digital age social robots will increasingly be used for educational purposes, such as second language tutoring. In this perspective article, we propose a number of design features to develop a child-friendly social robot that can effectively support children in second language learning, and we discuss some technical challenges for developing these. The features we propose include choices to develop the robot such that it can act as a peer to motivate the child during second language learning and build trust at the same time, while still being more knowledgeable than the child and scaffolding that knowledge in adult-like manner. We also believe that the first impressions children have about robots are crucial for them to build trust and common ground, which would support child-robot interactions in the long term. We therefore propose a strategy to introduce the robot in a safe way to toddlers. Other features relate to the ability to adapt to individual children’s language proficiency, respond contingently, both temporally and semantically, establish joint attention, use meaningful gestures, provide effective feedback and monitor children’s learning progress. Technical challenges we observe include automatic speech recognition (ASR) for children, reliable object recognition to facilitate semantic contingency and establishing joint attention, and developing human-like gestures with a robot that does not have the same morphology humans have. We briefly discuss an experiment in which we investigate how children respond to different forms of feedback the robot can give. PMID:28303094
An overview of the 2009 Fort Hood Robotics Rodeo
NASA Astrophysics Data System (ADS)
Norberg, Seth
2010-04-01
The Robotics Rodeo held from 31 August to 3 September 2009 at Fort Hood, Texas, had three stated goals: educate key decision makers and align the robotics industry; educate Soldiers and developers; and perform a live market survey of the current state of technologies to encourage the development of robotic systems that support operational needs. Both events that comprised the Robotics Rodeo, the Extravaganza and the robotic technology observation, demonstration and discussion (RTOD2), addressed these goals. The Extravaganza was designed to foster interaction between the vendors and the visitors, who included the media, Soldiers, others in the robotics industry, and key decision makers. The RTOD2 allowed the vendors a more private and focused interaction with the subject-matter expert teams; it was the forum for the vendors to demonstrate the robotic systems that supported the III Corps operational needs statements, which focus on route clearance, convoy operations, persistent stare, and robotic wingman. While the goals of the Rodeo were achieved, the underlying success of the event is the development of a new business model focused on collapsing the current model to get technologies into the hands of our warfighters more quickly. This new model combines the real-time data collection from the Rodeo, the Warfighter Needs from TRADOC, the emerging requirements from our current engagements, and assistance from industry partners to develop a future Army strategy for the rapid fielding of unmanned systems technologies.
Robotic retroperitoneal partial nephrectomy: a step-by-step guide.
Ghani, Khurshid R; Porter, James; Menon, Mani; Rogers, Craig
2014-08-01
To describe a step-by-step guide for successful implementation of the retroperitoneal approach to robotic partial nephrectomy (RPN). Patients and Methods: The patient is placed in the flank position and the table fully flexed to increase the space between the 12th rib and iliac crest. Access to the retroperitoneal space is obtained using a balloon-dilating device. Ports include a 12-mm camera port, two 8-mm robotic ports and a 12-mm assistant port placed in the anterior axillary line cephalad to the anterior superior iliac spine, and 7-8 cm caudal to the ipsilateral robotic port. Positioning and port placement strategies for a successful technique include: (i) docking the robot directly over the patient's head, parallel to the spine; (ii) making the incision for the camera port ≈1.9 cm (1 fingerbreadth) above the iliac crest, lateral to the triangle of Petit; (iii) Seldinger-technique insertion of a kidney-shaped balloon dilator into the retroperitoneal space; (iv) maximising the distance between all ports; (v) ensuring the camera arm is placed in the outer part of the 'sweet spot'. The retroperitoneal approach to RPN permits direct access to the renal hilum, requires no bowel mobilisation, and gives excellent visualisation of posteriorly located tumours.
Calderita, Luis Vicente; Manso, Luis J; Bustos, Pablo; Fernández, Fernando; Bandera, Antonio
2014-01-01
Background: Neurorehabilitation therapies exploiting the use-dependent plasticity of our neuromuscular system are devised to help patients who suffer from injuries or diseases of this system. These therapies take advantage of the fact that motor activity alters the properties of our neurons and muscles, including the pattern of their connectivity, and thus their functionality. Hence, a sensor-motor treatment where patients make certain movements will help them (re)learn how to move the affected body parts. But these traditional rehabilitation processes are usually repetitive and lengthy, reducing motivation and adherence to the treatment, and thus limiting the benefits for the patients. Objective: Our goal was to create innovative neurorehabilitation therapies based on THERAPIST, a socially assistive robot. THERAPIST is an autonomous robot that is able to find and execute plans and adapt them to new situations in real time. The software architecture of THERAPIST monitors and determines the course of action, learns from previous experiences, and interacts with people using verbal and non-verbal channels. THERAPIST can increase the adherence of the patient to the sessions using serious games. Data are recorded and can be used to tailor patient sessions. Methods: We hypothesized that pediatric patients would engage better in a therapeutic non-physical interaction with a robot, facilitating the design of new therapies to improve patient motivation. We propose RoboCog, a novel cognitive architecture. This architecture enhances the effectiveness and response time of complex multi-degree-of-freedom robots designed to collaborate with humans, combining two core elements: a deep and hybrid representation of the current state, both own and observed; and a set of task-dependent planners, working at different levels of abstraction but connected to this central representation through a common interface.
Using RoboCog, THERAPIST engages the human partner in an active interactive process. RoboCog also endows the robot with abilities for high-level planning, monitoring, and learning. Thus, THERAPIST engages the patient through different games or activities, and adapts the session to each individual. Results: RoboCog successfully integrates a deliberative planner with a set of modules working at situational or sensorimotor levels. This architecture also allows THERAPIST to deliver responses at a human rate. The synchronization of the multiple interaction modalities results from a unique scene representation or model. THERAPIST is now a socially interactive robot that, instead of reproducing the phrases or gestures that the developers decide, maintains a dialogue and autonomously generates gestures or expressions. THERAPIST is able to play simple games with human partners, which requires humans to perform certain movements, and also to capture the human motion for later analysis by clinic specialists. Conclusions: The initial hypothesis was validated by our experimental studies showing that interaction with the robot results in highly attentive and collaborative attitudes in pediatric patients. We also verified that RoboCog allows the robot to interact with patients at human rates. However, there remain many issues to overcome. The development of novel hands-off rehabilitation therapies will require the intersection of multiple challenging directions of research that we are currently exploring. PMID:28582242
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper is to present the study and design ideas for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot for mobility and robustness; 3. develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented by active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed considering esthetic elements while minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for the remote-connection service and educational purposes.
What does the literature say about using robots on children with disabilities?
Miguel Cruz, Antonio; Ríos Rincón, Adriana María; Rodríguez Dueñas, William Ricardo; Quiroga Torres, Daniel Alejandro; Bohórquez-Heredia, Andrés Felipe
2017-07-01
The purpose of this study is to examine the extent and type of robots used for the rehabilitation and education of children and young people with CP and ASD and the associated outcomes. The scholarly literature was systematically searched and analyzed. Articles were included if they reported the results of robots used, or intended to be used, for the rehabilitation and education of children and young people with CP and ASD during play, educative, and social interaction activities. We found 15 robotic systems reported in 34 studies that provided a low level of evidence. The outcomes were mainly, for children with ASD, improved interaction and a reduction in autistic behaviour, and, for children with CP, cognitive development, learning, and play. More research is needed in this area using designs that provide higher validity. A user-centred design approach is needed for developing new low-cost robots for this population. Implications for rehabilitation: In spite of the potential of robots to promote development in children with ASD and CP, the limited available evidence requires researchers to conduct studies with higher validity. The low level of evidence, plus the need for specialized technical support, should be considered critical factors before making the decision to purchase robots for use in treatment for children with CP and ASD. A user-centred design approach would increase the chances of success for robots to improve functional, learning, and educative outcomes in children with ASD and CP; we recommend that developers use this approach. The participation of interdisciplinary teams in the design, development, and implementation of new robotic systems is of extra value. We recommend the design and development of low-cost robotic systems to make robots more affordable.
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
A Novel Concept for Safe, Stiffness-Controllable Robot Links.
Stilli, Agostino; Wurdemann, Helge A; Althoefer, Kaspar
2017-03-01
The recent decade has seen an astounding increase of interest and advancement in a new field of robotics aimed at creating structures specifically for safe interaction with humans. Softness, flexibility, and variable stiffness in robotics have been recognized as highly desirable characteristics for many applications. A number of solutions have been proposed, ranging from entirely soft robots (such as those composed mainly of soft materials like silicone), via flexible continuum and snake-like robots, to rigid-link robots enhanced by joints that exhibit an elastic behavior either implemented in hardware or achieved purely by means of intelligent control. Although these are very good solutions paving the path to safe human-robot interaction, we propose here a new approach that focuses on creating stiffness controllability for the linkages between the robot joints. This article proposes a replacement for the traditionally rigid robot link: the new link is equipped with an additional capability of stiffness controllability. With this added feature, a robot can accurately carry out manipulation tasks (high stiffness), but can virtually instantaneously reduce its stiffness when a human is nearby or in contact with the robot. The key point of the invention described here is a robot link made of an airtight chamber formed by a soft, flexible, but high-strain-resistant combination of a plastic mesh and a silicone wall. Inflated with air to a high pressure, the mesh-silicone chamber behaves like a rigid link; reducing the air pressure softens the link, rendering the robot structure safe. This article investigates a number of link prototypes and shows the feasibility of the new concept. Stiffness tests have been performed, showing that a significant level of stiffness can be achieved: up to 40 N reaction force along the axial direction for a 25-mm-diameter sample at 60 kPa, at an axial deformation of 5 mm. The results confirm that this novel concept for robot manipulator linkages exhibits the beam-like behavior of traditional rigid links when fully pressurized, and significantly reduced stiffness at low pressure. The proposed concept has the potential to easily create safe robots, augmenting traditional robot designs.
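From the numbers reported in the abstract above, the implied axial stiffness at full pressure follows from the linear relation k = F/delta; a quick back-of-the-envelope check, assuming the response is roughly linear over the 5-mm deformation:

```python
def axial_stiffness(force_n, deflection_mm):
    """Linear axial stiffness estimate k = F / delta, in N/mm."""
    return force_n / deflection_mm

# Reported values for the 25-mm-diameter sample at 60 kPa:
# 40 N reaction force at 5 mm axial deformation.
k = axial_stiffness(40.0, 5.0)
print(f"implied axial stiffness: {k} N/mm")  # 8.0 N/mm
```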
Multi-Modal Interaction for Robotic Mules
2014-02-26
Glenn Taylor, Mike Quist, Matt Lanting, Cory Dunham, Patrick Theisen (Soar Technology, Inc.); Paul Muench (US Army TARDEC)
Interactive-rate Motion Planning for Concentric Tube Robots
Torres, Luis G.; Baykal, Cenk; Alterovitz, Ron
2014-01-01
Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient’s anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method’s high speed enables a user to continuously and freely move the robot’s tip while the motion planner ensures that the robot’s shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device’s shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot’s tip through the environment while the robot automatically avoids collisions with the anatomical obstacles. PMID:25436176
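The interactive speed described in the abstract above comes from splitting the work into offline roadmap precomputation and a cheap online query. A toy sketch of that split follows; the kinematic model and collision check here are invented stand-ins, purely illustrative of the precompute-then-query pattern, not the authors' implementation:

```python
import math
import random

random.seed(0)

def fake_kinematics(config):
    """Stand-in for an expensive kinematic model mapping a
    configuration (angle, insertion) to a 2-D tip position."""
    angle, insertion = config
    return (math.cos(angle) * insertion, math.sin(angle) * insertion)

# Offline precomputation: sample configurations, evaluate the model once
# per sample, and keep only the collision-free ones in the roadmap.
roadmap = []
for _ in range(1000):
    config = (random.uniform(0.0, math.pi), random.uniform(0.5, 1.0))
    tip = fake_kinematics(config)
    if tip[1] > 0.1:  # stand-in collision check (obstacle below y = 0.1)
        roadmap.append((config, tip))

def query(target_tip):
    """Online query: return the stored collision-free configuration whose
    tip is nearest the commanded tip. No model evaluation needed, so this
    is fast enough to run continuously as the user moves the tip."""
    return min(roadmap, key=lambda entry: math.dist(entry[1], target_tip))

config, tip = query((0.0, 0.8))
print(config, tip)
```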
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
1992-10-29
[Extraction residue from a teleoperation/robotics report; recoverable details: Peter J. Burt, David Sarnoff Research Center, Princeton, NJ 08543-5300, USA. The text concerns making remotely piloted robotic vehicles as intelligent and autonomous as the current state of technology allows, including concepts for autonomous ammunition resupply and a robotic combat evacuation vehicle.]
An intelligent robotic aid system for human services
NASA Technical Reports Server (NTRS)
Kawamura, K.; Bagchi, S.; Iskarous, M.; Pack, R. T.; Saad, A.
1994-01-01
The long term goal of our research at the Intelligent Robotic Laboratory at Vanderbilt University is to develop advanced intelligent robotic aid systems for human services. As a first step toward our goal, the current thrusts of our R&D are centered on the development of an intelligent robotic aid called the ISAC (Intelligent Soft Arm Control). In this paper, we describe the overall system architecture and current activities in intelligent control, adaptive/interactive control and task learning.
Lai, Ying-Chih; Deng, Jianan; Liu, Ruiyuan; Hsiao, Yung-Chi; Zhang, Steven L; Peng, Wenbo; Wu, Hsing-Mei; Wang, Xingfu; Wang, Zhong Lin
2018-06-04
Robots that can move, feel, and respond like organisms will bring revolutionary impact to today's technologies. Soft robots with organism-like adaptive bodies have shown great potential in a vast range of robot-human and robot-environment applications. Developing skin-like sensory devices allows them to naturally sense and interact with the environment, and ideally these sensing capabilities should be active, like real skin. However, challenges in complicated structures, incompatible moduli, poor stretchability and sensitivity, large driving voltages, and power dissipation hinder the applicability of conventional technologies. Here, various actively perceivable and responsive soft robots are enabled by self-powered active triboelectric robotic skins (tribo-skins) that simultaneously possess excellent stretchability and excellent sensitivity in the low-pressure regime. The tribo-skins can actively sense proximity, contact, and pressure from external stimuli via self-generated electricity. The driving energy comes from a natural triboelectrification effect involving the cooperation of contact electrification and electrostatic induction. The seamless integration of the tribo-skins and soft actuators enables soft robots to perform various active sensing and interactive tasks, including perceiving their own muscle motions, working states, a textile's dampness, and even subtle human physiological signals. Moreover, the self-generated signals can drive optoelectronic devices for visual communication and be processed for diverse sophisticated uses.
Thellman, Sam; Silvervarg, Annika; Ziemke, Tom
2017-01-01
People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those for human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: (1) substantially similar judgments of human and robot behavior, both in ascriptions of intentionality, controllability and desirability and in plausibility judgments of behavior explanations; (2) a high level of agreement in judgments of robot behavior, slightly lower than but still largely similar to the agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.
Forming Human-Robot Teams Across Time and Space
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.
2012-01-01
NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become caretakers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace.
This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection as well as force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.
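The supervisory control mode described above can be illustrated with a toy model: the operator's simulation responds to commands instantly, while the real robot only executes each command after a one-way delay and its telemetry takes another delay to return. All names and numbers below are illustrative assumptions, not details of the NASA system.

```python
from collections import deque

class DelayedLink:
    """FIFO channel that delivers items after a fixed tick delay."""
    def __init__(self, delay_ticks):
        self.delay = delay_ticks
        self.queue = deque()  # entries: (deliver_at_tick, item)

    def send(self, tick, item):
        self.queue.append((tick + self.delay, item))

    def receive(self, tick):
        out = []
        while self.queue and self.queue[0][0] <= tick:
            out.append(self.queue.popleft()[1])
        return out

def run(commands, delay=3, ticks=12):
    """commands: {tick: velocity step}. Returns (sim_positions, telemetry)."""
    uplink, downlink = DelayedLink(delay), DelayedLink(delay)
    sim_pos = 0.0    # operator-side simulation: no delay
    robot_pos = 0.0  # remote robot: commands arrive late
    sim_log, telemetry = [], []
    for t in range(ticks):
        if t in commands:
            sim_pos += commands[t]       # simulation responds instantly
            uplink.send(t, commands[t])
        for v in uplink.receive(t):      # robot executes delayed commands
            robot_pos += v
        downlink.send(t, robot_pos)      # robot reports its state each tick
        for p in downlink.receive(t):    # operator monitors delayed telemetry
            telemetry.append(p)
        sim_log.append(sim_pos)
    return sim_log, telemetry
```

A command issued at tick 0 moves the simulated robot immediately, but the operator only sees the real robot move once the command and its telemetry complete the round trip.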
Tool for Experimenting with Concepts of Mobile Robotics as Applied to Children's Education
ERIC Educational Resources Information Center
Jimenez Jojoa, E. M.; Bravo, E. C.; Bacca Cortes, E. B.
2010-01-01
This paper describes the design and implementation of a tool for experimenting with mobile robotics concepts, primarily for use by children and teenagers, or by the general public, without previous experience in robotics. This tool helps children learn about science in an approachable and interactive way, using scientific research principles in…
Robotic Technology: An Assessment and Forecast,
1984-07-01
Research Associates, Inc.; Dr. Roger Nagel, Lehigh University; Dr. Charles Rosen, Machine Intelligence Corporation; and Mr. Jack Thornton, Robot Insider...amr (Subcontractors: systems for assembly and Adopt Technology, inspection Stanford University, SRI) AFSC MANTECH: McDonnell Douglas, Machine ...supervisory controls, man-machine interaction and system integration. Foreign R&D: The U.S. faces a strong technological challenge in robotics from
Robot Lies in Health Care: When Is Deception Morally Permissible?
Matthias, Andreas
2015-06-01
Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This poses the question, whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
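The error-mask step above (comparing real and virtual camera views using local statistics) can be sketched in a few lines. This is an illustrative assumption-laden toy: it uses a box blur as a crude stand-in for the paper's local Gaussians, and small grayscale arrays in place of camera frames.

```python
def box_blur(img, r=1):
    """Local mean over a (2r+1)x(2r+1) window; a crude stand-in for
    Gaussian smoothing of a grayscale image (list of lists of floats)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def error_mask(real, virtual, thresh=0.2, r=1):
    """1 where the locally smoothed images disagree by more than thresh."""
    a, b = box_blur(real, r), box_blur(virtual, r)
    return [[1 if abs(a[y][x] - b[y][x]) > thresh else 0
             for x in range(len(real[0]))] for y in range(len(real))]
```

Regions flagged 1 mark where the virtual world disagrees with the camera input, and would be candidates for the next fixation.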
Rare Neural Correlations Implement Robotic Conditioning with Delayed Rewards and Disturbances
Soltoggio, Andrea; Lemme, Andre; Reinhart, Felix; Steil, Jochen J.
2013-01-01
Neural conditioning associates cues and actions with following rewards. The environments in which robots operate, however, are pervaded by a variety of disturbing stimuli and uncertain timing. In particular, variable reward delays make it difficult to reconstruct which previous actions are responsible for following rewards. Such an uncertainty is handled by biological neural networks, but represents a challenge for computational models, suggesting the lack of a satisfactory theory for robotic neural conditioning. The present study demonstrates the use of rare neural correlations in making correct associations between rewards and previous cues or actions. Rare correlations are functional in selecting sparse synapses to be eligible for later weight updates if a reward occurs. The repetition of this process singles out the associating and reward-triggering pathways, and thereby copes with distal rewards. The neural network displays macro-level classical and operant conditioning, which is demonstrated in an interactive real-life human-robot interaction. The proposed mechanism models realistic conditioning in humans and animals and implements similar behaviors in neuro-robotic platforms. PMID:23565092
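The rare-correlations mechanism can be caricatured in a few lines: tag a co-active cue-action synapse only occasionally, let the tag decay, and credit whatever is still tagged when a delayed reward arrives. The cue/action names, rates, and update rule below are illustrative assumptions, not the authors' network.

```python
import random

def train(trials=4000, delay=3, rare_p=0.1, lr=0.5, decay=0.8, seed=1):
    """Toy delayed-reward conditioning: the 'go' action after cue A earns a
    reward `delay` steps later. Rarely tagged, decaying eligibility traces
    let credit land on the cue-action pathway that caused the reward."""
    rng = random.Random(seed)
    w = {("A", "go"): 0.0, ("A", "stop"): 0.0,
         ("B", "go"): 0.0, ("B", "stop"): 0.0}
    elig = {k: 0.0 for k in w}
    pending = []  # scheduled rewards: [steps_left, magnitude]
    for _ in range(trials):
        cue = rng.choice(["A", "B"])
        action = rng.choice(["go", "stop"])   # exploratory behavior
        if rng.random() < rare_p:             # rare correlation: tag synapse
            elig[(cue, action)] = 1.0
        if cue == "A" and action == "go":     # this pathway causes reward
            pending.append([delay, 1.0])
        for item in pending:                  # advance time
            item[0] -= 1
        for item in [p for p in pending if p[0] <= 0]:
            for k in elig:                    # credit eligible synapses
                w[k] += lr * item[1] * elig[k]
            pending.remove(item)
        for k in elig:                        # traces decay between steps
            elig[k] *= decay
    return w
```

Because tags on the rewarded pathway systematically precede reward delivery while tags elsewhere coincide with it only by chance, the (A, go) weight grows fastest over many trials.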
Using mixed-initiative human-robot interaction to bound performance in a search task
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; Douglas A. Few; Devin S. Athey
2008-12-01
Mobile robots are increasingly used in dangerous domains, because they can keep humans out of harm’s way. Despite their advantages in hazardous environments, their general acceptance in other less dangerous domains has not been apparent and, even in dangerous environments, robots are often viewed as a “last-possible choice.” In order to increase the utility and acceptance of robots in hazardous domains researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support the human-robot interactions. In a recent “dirty-bomb” experiment, participants exhibited different search strategies making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants independent of prior training and of individual strategies used to accomplish the task.
Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain
2015-01-01
In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km/h on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning. PMID:26485148
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits to the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use, they should maintain awareness of surroundings, and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills, and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user health over time, and provides conversational dialogue during sitting sessions.
In this paper, we present the software and hardware framework that enable a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm.
The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.
An interactive online robotics course.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wedeward, Kevin; Bruder, Steven B. H.
Attempting to convey concepts and ideas in the subject area of robotic manipulators from within the confines of a static two-dimensional printed page can prove quite challenging to even the most gifted of authors. The inherently dynamic and multi-dimensional nature of the subject matter seems better suited to a medium of conveyance wherein a student is allowed to interactively explore topics in this multi-disciplinary field. This article describes the initial development of an online robotics course 'textbook' which seeks to leverage recent advances in Web-based technologies to enhance the learning experience in ways not possible with printed materials. The pedagogical approach employed herein is that of multi-modal reinforcement wherein key concepts are first described in words, conveyed visually, and finally reinforced by soliciting student interaction.
Intelligent robot trends and predictions for the .net future
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
2001-10-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent and future technical and economic trends. During the past twenty years the use of industrial robots that are equipped not only with precise motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. Intelligent robot products have been developed in many cases for factory automation and for some hospital and home applications. To reach an even wider range of applications, the addition of learning may be required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The adaptive critic is a good model for human learning. In general, the critic may be considered to be the human with the teach pendant, plant manager, line supervisor, quality inspector or the consumer. If the ultimate critic is the consumer, then the quality inspector must model the consumer's decision-making process and use this model in the design and manufacturing operations. Can the adaptive critic be used to advance intelligent robots? Intelligent robots have historically taken decades to be developed and reduced to practice. Methods for speeding this development include technology such as rapid prototyping and product development and government, industry and university cooperation.
Improving Emergency Response and Human-Robotic Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
David I. Gertman; David J. Bruemmer; R. Scott Hartley
2007-08-01
Preparedness for chemical, biological, and radiological/nuclear incidents at nuclear power plants (NPPs) includes the deployment of well trained emergency response teams. While teams are expected to do well, data from other domains suggests that the timeliness and accuracy associated with incident response can be improved through collaborative human-robotic interaction. Many incident response scenarios call for multiple, complex procedure-based activities performed by personnel wearing cumbersome personal protective equipment (PPE) and operating under high levels of stress and workload. While robotic assistance is postulated to reduce workload and exposure, limitations associated with communications and the robot’s ability to act independently have served to limit reliability and reduce our potential to exploit human-robotic interaction and efficacy of response. Recent work at the Idaho National Laboratory (INL) on expanding robot capability has the potential to improve human-system response during disaster management and recovery. Specifically, increasing the range of higher level robot behaviors such as autonomous navigation and mapping, evolving new abstractions for sensor and control data, and developing metaphors for operator control have the potential to improve state-of-the-art in incident response. This paper discusses these issues and reports on experiments underway using intelligence residing on the robot to enhance emergency response.
Adaptive walking of a quadrupedal robot based on layered biological reflexes
NASA Astrophysics Data System (ADS)
Zhang, Xiuli; Mingcheng, E.; Zeng, Xiangyu; Zheng, Haojun
2012-07-01
A multiple-legged robot is traditionally controlled by using its dynamic model. But the dynamic-model-based approach fails to acquire satisfactory performances when the robot faces rough terrains and unknown environments. Referring to animals' neural control mechanisms, a control model is built that allows a quadruped robot to walk adaptively. The basic rhythmic motion of the robot is controlled by a well-designed rhythmic motion controller (RMC) comprising a central pattern generator (CPG) for hip joints and a rhythmic coupler (RC) for knee joints. CPG and RC have relationships of motion-mapping and rhythmic coupling. Multiple sensory-motor models, abstracted from the neural reflexes of a cat, are employed. These reflex models are organized and thus interact with the CPG in three layers, to meet different requirements of complexity and response time to the tasks. On the basis of the RMC and layered biological reflexes, a quadruped robot is constructed, which can clear obstacles and walk uphill and downhill autonomously, and make a turn voluntarily in uncertain environments, interacting with the environment in a way similar to that of an animal. The paper provides a biologically inspired architecture, with which a robot can walk adaptively in uncertain environments in a simple and effective way, and achieve better performances.
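A CPG of the kind described above is often modeled as coupled phase oscillators that lock to fixed gait offsets, with the knees following the hips through a simple motion mapping. The gait offsets, gains, and knee mapping below are illustrative assumptions, not the paper's controller.

```python
import math

def cpg_gait(steps=2000, dt=0.005, freq=1.0):
    """Four coupled phase oscillators for the hips (a minimal CPG sketch).
    An assumed walk-gait pattern spaces the legs at 0, 0.5, 0.25, 0.75
    cycles; coupling pulls the phases toward those offsets."""
    offsets = [0.0, 0.5, 0.25, 0.75]       # LF, RF, LH, RH (cycle fractions)
    phases = [0.0, 0.1, 0.2, 0.3]          # deliberately desynchronized start
    k = 2.0                                # coupling gain
    for _ in range(steps):
        new = []
        for i, p in enumerate(phases):
            dp = 2 * math.pi * freq        # intrinsic rhythm
            for j, q in enumerate(phases): # pull toward desired offsets
                want = 2 * math.pi * (offsets[j] - offsets[i])
                dp += k * math.sin(q - p - want)
            new.append(p + dp * dt)
        phases = new
    hips = [math.sin(p) for p in phases]
    # knee "rhythmic coupler": flex only during the swing half of the cycle
    knees = [max(0.0, math.sin(p)) for p in phases]
    return phases, hips, knees
```

Started from arbitrary phases, the network settles into the prescribed gait pattern; reflex layers would then perturb these rhythms in response to sensing.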
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from descriptive research on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and examined during human-computer interaction. This is new information that we should consider, as it can help us build better human-computer interfaces and human-robot interfaces in the future.
Socialization between toddlers and robots at an early childhood education center
Tanaka, Fumihide; Cicourel, Aaron; Movellan, Javier R.
2007-01-01
A state-of-the-art social robot was immersed in a classroom of toddlers for >5 months. The quality of the interaction between children and robots improved steadily for 27 sessions, quickly deteriorated for 15 sessions when the robot was reprogrammed to behave in a predictable manner, and improved in the last three sessions when the robot displayed again its full behavioral repertoire. Initially, the children treated the robot very differently than the way they treated each other. By the last sessions, 5 months later, they treated the robot as a peer rather than as a toy. Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time and that it could have great potential in educational settings assisting teachers and enriching the classroom environment. PMID:17984068
Vocal emotion of humanoid robots: a study from brain mechanism.
Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong
2014-01-01
Driven by rapid ongoing advances in humanoid robots, increasing attention has been shifted to the issue of emotion intelligence of AI robots to facilitate communication between machines and human beings, especially for vocal emotion in the interactive systems of future humanoid robots. This paper explored the brain mechanism of vocal emotion by reviewing previous research and developed an experiment to observe the brain response by fMRI, in order to analyze the vocal emotion of human beings. Findings in this paper provide a new approach to design and evaluate the vocal emotion of humanoid robots based on the brain mechanism of human beings.
Customized Interactive Robotic Treatment for Stroke: EMG-Triggered Therapy
Dipietro, Laura; Ferraro, Mark; Palazzolo, Jerome Joseph; Krebs, Hermano Igo; Volpe, Bruce T.; Hogan, Neville
2009-01-01
A system for electromyographic (EMG) triggering of robot-assisted therapy (dubbed the EMG game) for stroke patients is presented. The onset of a patient’s attempt to move is detected by monitoring EMG in selected muscles, whereupon the robot assists her or him to perform point-to-point movements in a horizontal plane. Besides delivering customized robot-assisted therapy, the system can record signals that may be useful to better understand the process of recovery from stroke. Preliminary experiments aimed at testing the proposed system and gaining insight into the potential of EMG-triggered, robot-assisted therapy are reported. PMID:16200756
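A minimal sketch of EMG onset detection (rectify, smooth, threshold against a resting baseline) might look as follows. The window length, gain, and baseline duration are illustrative assumptions; the actual EMG-game trigger logic is more elaborate.

```python
def detect_onset(emg, win=5, k=3.0, baseline_n=20):
    """Return the first sample index where smoothed, rectified EMG exceeds
    k times the resting baseline, or None if no onset is found."""
    rect = [abs(x) for x in emg]                       # full-wave rectify
    smooth = [sum(rect[max(0, i - win + 1):i + 1]) / min(win, i + 1)
              for i in range(len(rect))]               # moving average
    base = sum(smooth[:baseline_n]) / baseline_n       # resting level
    thresh = k * base if base > 0 else k * 1e-6
    for i in range(baseline_n, len(smooth)):
        if smooth[i] > thresh:
            return i                                   # onset: trigger assist
    return None
```

On detecting an onset, such a system would cue the robot to begin assisting the point-to-point movement.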
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction on the object in the image is performed, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
Path planning algorithms for assembly sequence planning. [in robot kinematics
NASA Technical Reports Server (NTRS)
Krishnan, S. S.; Sanderson, Arthur C.
1991-01-01
Planning for manipulation in complex environments often requires reasoning about the geometric and mechanical constraints which are posed by the task. In planning assembly operations, the automatic generation of operations sequences depends on the geometric feasibility of paths which permit parts to be joined into subassemblies. Feasible locations and collision-free paths must be present for part motions, robot and grasping motions, and fixtures. This paper describes an approach to reasoning about the feasibility of straight-line paths among three-dimensional polyhedral parts using an algebra of polyhedral cones. A second method recasts the feasibility conditions as constraints in a nonlinear optimization framework. Both algorithms have been implemented and results are presented.
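As a loose illustration of the straight-line feasibility test above, axis-aligned boxes can stand in for the paper's polyhedral parts and cone algebra: a segment-vs-box slab test screens candidate extraction directions for collisions. The geometry below is an assumed toy example, not the authors' algorithm.

```python
def segment_hits_box(p0, p1, box, eps=1e-9):
    """Slab test: does the segment p0->p1 intersect the AABB (lo, hi)?"""
    lo, hi = box
    t0, t1 = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < eps:                      # segment parallel to this slab
            if p0[a] < lo[a] or p0[a] > hi[a]:
                return False
            continue
        ta, tb = (lo[a] - p0[a]) / d, (hi[a] - p0[a]) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)     # shrink the overlap interval
        if t0 > t1:
            return False
    return True

def feasible_directions(part_pos, candidates, obstacles, travel=10.0):
    """Keep candidate unit directions whose straight-line extraction path
    clears every obstacle box."""
    ok = []
    for d in candidates:
        p1 = tuple(part_pos[a] + travel * d[a] for a in range(3))
        if not any(segment_hits_box(part_pos, p1, b) for b in obstacles):
            ok.append(d)
    return ok
```

An assembly sequence planner would run such a feasibility check per part motion, retaining only operation orders for which a collision-free disassembly direction exists.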
Telemedicine: An expanding new science on land and sea
NASA Technical Reports Server (NTRS)
Jackman, K. R.; Rippo, A. J.
1977-01-01
Several medical and technical men in San Diego County are concerned with the need in many rural communities for 24-hour-a-day, 7-day-a-week access to adequate medical care. People isolated from urban areas by travel times of 40 minutes tend to delay seeking early and effective medical care. The authors were able to assemble quality technology which permits narrow-band video pictures, better known in the CB trade as ROBOT slow-scan television (SSTV), to be transmitted over telephone lines, by microwave, through satellite bounce, or by HF radio. These 'ROBOT' pictures can be accompanied by explanatory audio communication and by diagnostic signals from electronic instruments.
A generalized method for multiple robotic manipulator programming applied to vertical-up welding
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh
1991-01-01
The application of a weld programming algorithm to vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while minimizing the sum-of-squares of joint displacement; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. Also presented is a modification of this algorithm which permits it to be used for vertical-up welding. The details of this modification are discussed and simulation examples are provided for illustration and verification.
Rasheed, Nadia; Amin, Shamsudin H M
2016-01-01
Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue.
Soft Robotic Grippers for Biological Sampling on Deep Reefs.
Galloway, Kevin C; Becker, Kaitlyn P; Phillips, Brennan; Kirby, Jordan; Licht, Stephen; Tchernov, Dan; Wood, Robert J; Gruber, David F
2016-03-01
This article presents the development of an underwater gripper that utilizes soft robotics technology to delicately manipulate and sample fragile species on the deep reef. Existing solutions for deep sea robotic manipulation have historically been driven by the oil industry, resulting in destructive interactions with undersea life. Soft material robotics relies on compliant materials that are inherently impedance matched to natural environments and to soft or fragile organisms. We demonstrate design principles for soft robot end effectors, bench-top characterization of their grasping performance, and conclude by describing in situ testing at mesophotic depths. The result is the first use of soft robotics in the deep sea for the nondestructive sampling of benthic fauna.
Use of symbolic computation in robotics education
NASA Technical Reports Server (NTRS)
Vira, Naren; Tunstel, Edward
1992-01-01
An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation model equations, and the Lagrange dynamics formulation for N degree-of-freedom, open-chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing burdensome tasks of mathematical manipulations. The software package has been successfully tested for its accuracy using commercially available robots.
Design Of An Intelligent Robotic System Organizer Via Expert System Techniques
NASA Astrophysics Data System (ADS)
Yuan, Peter H.; Valavanis, Kimon P.
1989-02-01
Intelligent Robotic Systems are a special type of Intelligent Machines. When modeled based on the theory of Intelligent Controls, they are composed of three interactive levels, namely: organization, coordination, and execution, ordered according to the Principle of Increasing Intelligence with Decreasing Precision. Expert System techniques are used to design an Intelligent Robotic System Organizer with a dynamic Knowledge Base and an interactive Inference Engine. Task plans are formulated using either or both of a Probabilistic Approach and a Forward Chaining Methodology, depending on pertinent information associated with a specific requested job. The Intelligent Robotic System Organizer is implemented and tested on a prototype system operating in an uncertain environment. An evaluation of the performance of the prototype system is conducted based upon the probability of generating a successful task sequence versus the number of trials taken by the organizer.
Adaptive Control Strategies for Interlimb Coordination in Legged Robots: A Review
Aoi, Shinya; Manoonpong, Poramate; Ambe, Yuichi; Matsuno, Fumitoshi; Wörgötter, Florentin
2017-01-01
Walking animals produce adaptive interlimb coordination during locomotion in accordance with their situation. Interlimb coordination is generated through the dynamic interactions of the neural system, the musculoskeletal system, and the environment, although the underlying mechanisms remain unclear. Recently, investigations of the adaptation mechanisms of living beings have attracted attention, and bio-inspired control systems based on neurophysiological findings regarding sensorimotor interactions are being developed for legged robots. In this review, we introduce adaptive interlimb coordination for legged robots induced by various factors (locomotion speed, environmental situation, body properties, and task). In addition, we show characteristic properties of adaptive interlimb coordination, such as gait hysteresis and different time-scale adaptations. We also discuss the underlying mechanisms and control strategies to achieve adaptive interlimb coordination and the design principle for the control system of legged robots. PMID:28878645
Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.
Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela
2015-12-01
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Middleware Design for Swarm-Driving Robots Accompanying Humans.
Kim, Min Su; Kim, Sang Hyuck; Kang, Soon Ju
2017-02-17
Robots that accompany humans are the subject of continuing research. The Pet-Bot provides walking-assistance and object-carrying services without any specific controls, through real-time interaction between the robot and the human. However, with Pet-Bot there is a limit to the number of robots a user can use; if this limit is overcome, the Pet-Bot can provide services in more areas. Therefore, in this study we propose a swarm-driving middleware design adopting the concept of a swarm, which provides effective parallel movement to allow multiple human-accompanying robots to accomplish a common purpose. The middleware's functions divide into three parts: a sequence manager for the swarm process, a messaging manager, and a relative-location identification manager. The middleware processes the swarm-process sequence of the robots through message exchange over radio frequency (RF) communication using an IEEE 802.15.4 MAC protocol, and manages an infrared (IR) communication module that identifies relative location from IR signal strength. The swarm in this study is composed of a master, which interacts with the user, and slaves, which do not. This composition is intended to control the overall swarm in synchronization with user activity, which is difficult to predict. We evaluate the accuracy of the relative-location estimation using IR communication, the response time of the slaves to a change in user activity, and the time to organize a network according to the number of slaves. PMID:28218650
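The relative-location identification from IR signal strength described in this abstract can be sketched in a simplified form. Everything below is a hypothetical illustration (the paper's actual estimator is not specified in the abstract): bearing as an intensity-weighted circular mean over directional receivers, and range from a standard log-distance signal model with made-up calibration constants.

```python
import math

def estimate_bearing(rssi_by_angle):
    """Bearing to an IR emitter as the intensity-weighted circular mean
    of the directional receivers' facing angles (radians)."""
    sx = sum(rssi * math.cos(ang) for ang, rssi in rssi_by_angle.items())
    sy = sum(rssi * math.sin(ang) for ang, rssi in rssi_by_angle.items())
    return math.atan2(sy, sx)

def estimate_distance(rssi, p0=-40.0, n=2.0):
    """Log-distance model: rssi(d) = p0 - 10*n*log10(d), inverted to
    d = 10**((p0 - rssi) / (10*n)).  p0 (signal at 1 m) and n (path-loss
    exponent) are hypothetical calibration constants."""
    return 10.0 ** ((p0 - rssi) / (10.0 * n))

# Four receivers facing 0, 90, 180, 270 degrees; emitter to the north-east.
readings = {0.0: 5.0, math.pi / 2: 3.0, math.pi: 0.5, 3 * math.pi / 2: 0.5}
bearing = estimate_bearing(readings)
```

A real implementation would calibrate p0 and n per module and filter readings over time, but the inversion step is the same.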
So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing
2017-07-04
While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation: Compared to typically developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. A robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production, except that there was no strong evidence showing children in the intervention group could produce gestures accurately in human-to-human interaction.
Modelling robot construction systems
NASA Technical Reports Server (NTRS)
Grasso, Chris
1990-01-01
TROTERs are small, inexpensive robots that can work together to accomplish sophisticated construction tasks. To understand the issues involved in designing and operating a team of TROTERs, the robots and their components are being modeled. A TROTER system that features standardized component behavior is introduced. An object-oriented model implemented in the Smalltalk programming language is described, and the advantages of the object-oriented approach for simulating robot and component interactions are discussed. The presentation includes preliminary results and a discussion of outstanding issues.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. This HRI research contributes to closure of HRP gaps by providing information on how display and control characteristics, those related to guidance, feedback, and command modalities, affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for the design of effective human-robot interfaces.
Common Metrics for Human-Robot Interaction
NASA Technical Reports Server (NTRS)
Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael
2006-01-01
This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.
ERIC Educational Resources Information Center
Wu, Wen-Chi Vivian; Wang, Rong-Jyue; Chen, Nian-Shing
2015-01-01
This paper presents a design for a cutting-edge English program in which elementary school learners of English as a foreign language in Taiwan had lively interactions with a teaching assistant robot. Three dimensions involved in the design included (1) a pleasant and interactive classroom environment as the learning context, (2) a teaching…
Using robots to help people habituate to visible disabilities.
Riek, Laurel D; Robinson, Peter
2011-01-01
We explore a new way of using robots as human-human social facilitators: inter-ability communication. This refers to communication between people with disabilities and those without disabilities. We have interviewed people with head and facial movement disorders (n = 4), and, using a vision-based approach, recreated their movements on our 27 degree-of-freedom android robot. We then conducted an exploratory experiment (n = 26) to see if the robot might serve as a suitable tool to allow people to practice inter-ability interaction on a robot before doing it with a person. Our results suggest a robot may be useful in this manner. Furthermore, we have found a significant relationship between people who hold negative attitudes toward robots and negative attitudes toward people with disabilities. © 2011 IEEE
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
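The linearity the authors report between zenith angle and image location matches the standard equidistant fisheye model, r = f·θ. As a minimal sketch (hypothetical names and geometry, not the paper's code), one can invert that model to recover the zenith angle and locate a beacon of known height above the camera:

```python
import math

def zenith_from_radius(r_pixels, f_pixels):
    """Equidistant fisheye model: image radius grows linearly with the
    zenith angle, r = f * theta, so theta = r / f."""
    return r_pixels / f_pixels

def beacon_offset(r_pixels, azimuth, f_pixels, beacon_height):
    """Horizontal (x, y) offset of a beacon a known height above the
    camera, from its image radius and azimuth (hypothetical geometry)."""
    theta = zenith_from_radius(r_pixels, f_pixels)
    horizontal_range = beacon_height * math.tan(theta)  # range along ground
    return (horizontal_range * math.cos(azimuth),
            horizontal_range * math.sin(azimuth))
```

The calibration the paper describes amounts to fitting f (and residual distortion corrections) so that this linear relation holds across the field of view.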
ERIC Educational Resources Information Center
Howard, A. M.; Park, Chung Hyuk; Remy, S.
2012-01-01
The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of core computer-science curriculum at a number of universities. Unfortunately, for students with visual…
Robot Comedy Lab: experimenting with the social dynamics of live performance
Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias
2015-01-01
The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them, negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they're having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585
KREBS, H.I.; VOLPE, B.T.
2015-01-01
This chapter focuses on rehabilitation robotics which can be used to augment the clinician’s toolbox in order to deliver meaningful restorative therapy for an aging population, as well as on advances in orthotics to augment an individual’s functional abilities beyond neurorestoration potential. The interest in rehabilitation robotics and orthotics is increasing steadily with marked growth in the last 10 years. This growth is understandable in view of the increased demand for caregivers and rehabilitation services escalating apace with the graying of the population. We will provide an overview on improving function in people with a weak limb due to a neurological disorder who cannot properly control it to interact with the environment (orthotics); we will then focus on tools to assist the clinician in promoting rehabilitation of an individual so that s/he can interact with the environment unassisted (rehabilitation robotics). We will present a few clinical results occurring immediately poststroke as well as during the chronic phase that demonstrate superior gains for the upper extremity when employing rehabilitation robotics instead of usual care. These include the landmark VA-ROBOTICS multisite, randomized clinical study which demonstrates clinical gains for chronic stroke that go beyond usual care at no additional cost. PMID:23312648
Design and control of compliant tensegrity robots through simulation and hardware validation
Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas
2014-01-01
To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292
Krebs, H I; Volpe, B T
2013-01-01
This chapter focuses on rehabilitation robotics which can be used to augment the clinician's toolbox in order to deliver meaningful restorative therapy for an aging population, as well as on advances in orthotics to augment an individual's functional abilities beyond neurorestoration potential. The interest in rehabilitation robotics and orthotics is increasing steadily with marked growth in the last 10 years. This growth is understandable in view of the increased demand for caregivers and rehabilitation services escalating apace with the graying of the population. We provide an overview on improving function in people with a weak limb due to a neurological disorder who cannot properly control it to interact with the environment (orthotics); we then focus on tools to assist the clinician in promoting rehabilitation of an individual so that s/he can interact with the environment unassisted (rehabilitation robotics). We present a few clinical results occurring immediately poststroke as well as during the chronic phase that demonstrate superior gains for the upper extremity when employing rehabilitation robotics instead of usual care. These include the landmark VA-ROBOTICS multisite, randomized clinical study which demonstrates clinical gains for chronic stroke that go beyond usual care at no additional cost. Copyright © 2013 Elsevier B.V. All rights reserved.
Quantum robots plus environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-07-23
A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far, oracles, databases, and quantum registers, are seen to be special cases of environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single time step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.
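The alternating computation/action structure of the single-step operator can be illustrated numerically. The sketch below is a toy under simplifying assumptions (a quantum-walk-style step, not Benioff's exact T): a unitary "computation" phase on a two-state internal register composed with a conditional-shift "action" phase on a small periodic 1D lattice, with unitarity of the combined step checked explicitly.

```python
import numpy as np

N = 8  # lattice sites (periodic boundary); 2 internal register states

# "Computation" phase: a Hadamard on the internal register (plays the
# role of T_c, acting only on the on-board computer).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Tc = np.kron(H, np.eye(N))

# "Action" phase: shift the robot one lattice site, conditioned on the
# register state (plays the role of T_a).
shift = np.roll(np.eye(N), 1, axis=0)   # permutation: site k -> k+1 (mod N)
Ta = np.zeros((2 * N, 2 * N))
Ta[:N, :N] = shift        # register state 0: move right
Ta[N:, N:] = shift.T      # register state 1: move left
U = Ta @ Tc               # one computation-then-action time step

psi = np.zeros(2 * N)
psi[0] = 1.0              # robot at site 0, register state 0
for _ in range(5):        # five alternating-phase steps
    psi = U @ psi
```

Because each phase is unitary, probability is conserved over repeated steps; the dispersion of psi across lattice sites is the "quantum phase path dispersion" the task analysis must contend with.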
NASA Astrophysics Data System (ADS)
Thomaz, Andrea; Breazeal, Cynthia
2008-06-01
We present a learning system, socially guided exploration, in which a social robot learns new tasks through a combination of self-exploration and social interaction. The system's motivational drives, along with social scaffolding from a human partner, bias behaviour to create learning opportunities for a hierarchical reinforcement learning mechanism. The robot is able to learn on its own, but can flexibly take advantage of the guidance of a human teacher. We report the results of an experiment that analyses what the robot learns on its own as compared to being taught by human subjects. We also analyse the video of these interactions to understand human teaching behaviour and the social dynamics of the human-teacher/robot-learner system. With respect to learning performance, human guidance results in a task set that is significantly more focused and efficient at the tasks the human was trying to teach, whereas self-exploration results in a more diverse set. Analysis of human teaching behaviour reveals insights of social coupling between the human teacher and robot learner, different teaching styles, strong consistency in the kinds and frequency of scaffolding acts across teachers and nuances in the communicative intent behind positive and negative feedback.
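The interplay of self-exploration and social scaffolding can be caricatured with a tiny tabular reinforcement learner. Everything below is a hypothetical toy (a 1-D corridor, not the authors' hierarchical system): a teacher callback, when supplied, occasionally overrides action selection, biasing exploration toward the taught task.

```python
import random

def guided_q_learning(teacher=None, episodes=200, seed=0):
    """Tabular Q-learning on a toy 1-D corridor (states 0..5, reward at 5).
    `teacher`, when given, suggests an action for the current state;
    following it part of the time stands in for social scaffolding
    layered on top of self-exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(6) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if teacher is not None and rng.random() < 0.5:
                a = teacher(s)                    # follow the teacher's cue
            elif rng.random() < 0.2:
                a = rng.choice((-1, +1))          # self-exploration
            else:
                a = max((-1, +1), key=lambda act: q[(s, act)])  # greedy
            s2 = min(5, max(0, s + a))
            reward = 1.0 if s2 == 5 else 0.0
            target = reward + 0.9 * max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += 0.5 * (target - q[(s, a)])
            if s2 == 5:
                break
            s = s2
    return q

# A teacher that always cues movement toward the goal.
q_taught = guided_q_learning(teacher=lambda s: +1)
q_solo = guided_q_learning(teacher=None)
```

With the teacher present the learner converges quickly on the cued behavior; left alone it must stumble on the reward by itself, a loose analogue of the focused-versus-diverse task sets reported above.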
Robotic Technology Development at Ames: The Intelligent Robotics Group and Surface Telerobotics
NASA Technical Reports Server (NTRS)
Bualat, Maria; Fong, Terrence
2013-01-01
Future human missions to the Moon, Mars, and other destinations offer many new opportunities for exploration. But, astronaut time will always be limited and some work will not be feasible for humans to do manually. Robots, however, can complement human explorers, performing work autonomously or under remote supervision from Earth. Since 2004, the Intelligent Robotics Group has been working to make human-robot interaction efficient and effective for space exploration. A central focus of our research has been to develop and field test robots that benefit human exploration. Our approach is inspired by lessons learned from the Mars Exploration Rovers, as well as human spaceflight programs, including Apollo, the Space Shuttle, and the International Space Station. We conduct applied research in computer vision, geospatial data systems, human-robot interaction, planetary mapping and robot software. In planning for future exploration missions, architecture and study teams have made numerous assumptions about how crew can be telepresent on a planetary surface by remotely operating surface robots from space (i.e. from a flight vehicle or deep space habitat). These assumptions include estimates of technology maturity, existing technology gaps, and likely operational and functional risks. These assumptions, however, are not grounded by actual experimental data. Moreover, no crew-controlled surface telerobotic system has yet been fully tested, or rigorously validated, through flight testing. During Summer 2013, we conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover across short time delays. The tests simulated portions of a proposed human-robotic Lunar Waypoint mission, in which astronauts in lunar orbit remotely operate a planetary rover on the lunar Farside to deploy a radio telescope array. We used these tests to obtain baseline-engineering data.
The psychosocial effects of a companion robot: a randomized controlled trial.
Robinson, Hayley; Macdonald, Bruce; Kerse, Ngaire; Broadbent, Elizabeth
2013-09-01
To investigate the psychosocial effects of the companion robot, Paro, in a rest home/hospital setting in comparison to a control group. Randomized controlled trial. Residents were randomized to the robot intervention group or a control group that attended normal activities instead of Paro sessions. Sessions took place twice a week for an hour over 12 weeks. Over the trial period, observations were conducted of residents' social behavior when interacting as a group with the robot. As a comparison, observations were also conducted of all the residents during general activities when the resident dog was or was not present. A residential care facility in Auckland, New Zealand. Forty residents in hospital and rest home care. Residents completed a baseline measure assessing cognitive status, loneliness, depression, and quality of life. At follow-up, residents completed a questionnaire assessing loneliness, depression, and quality of life. During observations, behavior was noted and collated for instances of talking and stroking the dog/robot. In comparison with the control group, residents who interacted with the robot had significant decreases in loneliness over the period of the trial. Both the resident dog and the seal robot made an impact on the social environment in comparison to when neither was present. Residents talked to and touched the robot significantly more than the resident dog. A greater number of residents were involved in discussion about the robot in comparison with the resident dog and conversation about the robot occurred more. Paro is a positive addition to this environment and has benefits for older people in nursing home care. Paro may be able to address some of the unmet needs of older people that a resident animal may not, particularly relating to loneliness. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.
Meeting the challenges of installing a mobile robotic system
NASA Technical Reports Server (NTRS)
Decorte, Celeste
1994-01-01
The challenges of integrating a mobile robotic system into an application environment are many. Most problems inherent in installing a mobile robotic system fall into one of three categories: (1) the physical environment: the location(s) where, and conditions under which, the mobile robotic system will work; (2) the technological environment: external equipment with which the mobile robotic system will interact; and (3) the human environment: personnel who will operate and interact with the mobile robotic system. The successful integration of a mobile robotic system into these three types of application environments requires more than a good pair of pliers. The tools for this job include careful planning, accurate measurement data (as-built drawings), complete technical data on the systems to be interfaced, sufficient time and attention from key personnel for training on how to operate and program the robot, on-site access during installation, and a thorough understanding and appreciation, by all concerned, of the mobile robotic system's role in the security mission at the site, as well as the machine's capabilities and limitations. Patience, luck, and a sense of humor are also useful tools to keep handy during a mobile robotic system installation. This paper will discuss some specific examples of problems in each of the three categories and explore approaches to solving them. The discussion will draw from the author's experience with on-site installations of mobile robotic systems in various applications. Most of the information discussed in this paper has come directly from knowledge gained during installations of Cybermotion's SR2 security robots. A large part of the discussion will apply to any vehicle with a drive system, collision avoidance, and navigation sensors, which is, of course, what makes a vehicle autonomous. It is with these sensors and the drive system that the installer must become familiar in order to foresee potential trouble areas in the physical, technical, and human environments.
Motor contagion during human-human and human-robot interaction.
Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry
2014-01-01
Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how a partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object-directed and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990
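The "biological laws of motion" referenced in this abstract are commonly operationalized with the minimum-jerk model of point-to-point reaching. The sketch below is an illustration of that standard model, not the authors' stimulus code; the function name and parameters are hypothetical.

```python
def minimum_jerk_position(t, duration, x_start, x_end):
    """Minimum-jerk point-to-point profile, a standard model of biological
    reaching kinematics: x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5),
    where tau = t / duration. Velocity is bell-shaped and zero at both endpoints."""
    tau = t / duration
    return x_start + (x_end - x_start) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

A non-biological, constant-velocity profile would by contrast be `x_start + (x_end - x_start) * t / duration`, which starts and stops abruptly; this is the kind of kinematic violation that suppressed motor contagion in the study.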
Information-theoretic decomposition of embodied and situated systems.
Da Rold, Federico
2018-07-01
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
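The two measures named in this abstract (mutual information for degree of dependence, transfer entropy for direction of information flow) can be sketched with simple plug-in histogram estimators. This is an illustrative sketch, not the authors' code: the binning scheme, the history length of 1, and the function names are my assumptions, and a serious analysis would use bias-corrected or kernel estimators.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X; Y) in bits via 2D histogram binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def transfer_entropy(source, target, bins=8):
    """Plug-in estimate of TE(source -> target) with history length 1:
    I(target_{t+1}; source_t | target_t), in bits."""
    def discretize(v):
        edges = np.histogram_bin_edges(v, bins=bins)
        return np.digitize(v, edges[1:-1])  # bin indices 0..bins-1
    s = discretize(source)[:-1]
    t = discretize(target)
    t_now, t_next = t[:-1], t[1:]
    counts = np.zeros((bins, bins, bins))
    np.add.at(counts, (t_next, t_now, s), 1)
    p = counts / counts.sum()
    p_tn_t = p.sum(axis=2)    # p(target_{t+1}, target_t)
    p_t_s = p.sum(axis=0)     # p(target_t, source_t)
    p_t = p.sum(axis=(0, 2))  # p(target_t)
    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                if p[i, j, k] > 0:
                    te += p[i, j, k] * np.log2(
                        p[i, j, k] * p_t[j] / (p_tn_t[i, j] * p_t_s[j, k]))
    return float(te)
```

Applied to perceptual and motor time series as in the article, mutual information is symmetric while transfer entropy is not, which is what lets the latter detect the direction of information flow between sensors and motors.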
Piezoresistive pressure sensor array for robotic skin
NASA Astrophysics Data System (ADS)
Mirza, Fahad; Sahasrabuddhe, Ritvij R.; Baptist, Joshua R.; Wijesundara, Muthu B. J.; Lee, Woo H.; Popa, Dan O.
2016-05-01
Robots are starting to transition from the confines of the manufacturing floor to homes, schools, hospitals, and highly dynamic environments. As a result, it is impossible to foresee all the probable operational situations of robots and preprogram robot behavior for those situations. Among human-robot interaction technologies, haptic communication is an intuitive physical interaction method that can help define operational behaviors for robots cooperating with humans. Multimodal robotic skin with distributed sensors can help robots increase the perception of their surrounding environments. Electro-Hydro-Dynamic (EHD) printing is a flexible multi-modal sensor fabrication method because it can directly print a wide range of materials onto substrates with non-uniform topographies. In past work we designed interdigitated comb electrodes as a sensing element and printed piezoresistive strain sensors using customized EHD-printable PEDOT:PSS-based inks. We formulated a PEDOT:PSS derivative ink by mixing PEDOT:PSS and DMSO. Bending-induced characterization tests of prototyped sensors showed high sensitivity and sufficient stability. In this paper, we describe SkinCells, robot skin sensor arrays integrated with electronic modules. A 4x4 EHD-printed array of strain sensors was packaged onto Kapton sheets with silicone encapsulant and interconnected to a custom electronic module consisting of a microcontroller, a Wheatstone bridge with an adjustable digital potentiometer, a multiplexer, and a serial communication unit. Thus, SkinCell's electronics can be used for signal acquisition, conditioning, and networking between sensor modules. Several SkinCells were subjected to controlled pressure, temperature, and humidity testing, and the results are reported in this paper.
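As a rough illustration of the signal-conditioning step described above, a quarter-bridge Wheatstone configuration (one active piezoresistive element among three fixed resistors) relates the bridge output voltage to the fractional resistance change of the sensing element. The sketch below works under that small-signal assumption; the function names, sign convention, and ADC parameters are hypothetical, not taken from the paper.

```python
def adc_to_voltage(adc_code, v_ref=3.3, resolution_bits=10):
    """Convert a raw microcontroller ADC code to volts (hypothetical 10-bit ADC
    with a 3.3 V reference)."""
    return adc_code * v_ref / (2**resolution_bits - 1)

def bridge_fractional_resistance_change(v_out, v_excitation):
    """Quarter-bridge Wheatstone approximation: for a single active element
    of nominal resistance R and small deviation dR,
        v_out / v_excitation ~= -(dR / R) / 4,
    so dR / R ~= -4 * v_out / v_excitation. The sign depends on which bridge
    arm holds the active element."""
    return -4.0 * v_out / v_excitation
```

In a SkinCell-like readout chain, the multiplexer would select one of the 16 array elements, the amplified bridge voltage would be digitized, and a conversion of this kind would recover the strain-proportional resistance change before networking the value over the serial link.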
Arash: A social robot buddy to support children with cancer in a hospital environment.
Meghdari, Ali; Shariati, Azadeh; Alemi, Minoo; Vossoughi, Gholamreza R; Eydi, Abdollah; Ahmadi, Ehsan; Mozafari, Behrad; Amoozandeh Nobaveh, Ali; Tahami, Reza
2018-06-01
This article presents the thorough design procedure, specifications, and performance of Arash, a mobile social robot friend for the educational and therapeutic involvement of children with cancer, based on their interests and needs. Our research focuses on employing Arash in a pediatric hospital environment to entertain, assist, and educate children with cancer who suffer from physical pain caused by both the disease and its treatment process. Since cancer treatment causes emotional distress, which can reduce the efficiency of medications, using social robots to interact with children with cancer in a hospital environment could decrease this distress, thereby improving the effectiveness of their treatment. Arash is a 15-degree-of-freedom, low-cost humanoid mobile robot buddy, carefully designed with appropriate measures and developed to interact with children aged 5-12 years. The robot has five physical subsystems: the head, arms, torso, waist, and mobile platform. The robot's final appearance is a significant novel concept, since it was selected based on a survey of 50 children with chronic diseases at three pediatric hospitals in Tehran, Iran. Based on these measures and desires, Arash was designed, built, improved, and enhanced to operate successfully in pediatric cancer hospitals. Two experiments were devised to evaluate the children's level of acceptance of and involvement with the robot, assess their feelings about it, and measure how closely the robot matched the favored conceptual sketch. Both experiments were conducted in the form of storytelling and appearance/performance evaluations. The obtained results confirm the high engagement and interest of pediatric cancer patients with the constructed robot.
NASA Technical Reports Server (NTRS)
Spudis, Paul D.; Lucey, Paul G.
1993-01-01
The Clementine mission will provide us with an abundance of information about lunar surface morphology, topography, and composition, and it will permit us to infer the history of the Moon and the processes that have shaped that history. This information can be used to address fundamental questions in lunar science and allow us to make significant advances towards deciphering the complex story of the Moon. The Clementine mission will also permit a first-order global assessment of the resources of the Moon and provide a strategic base of knowledge upon which future robotic and human missions to the Moon can build.
Gergely, Anna; Petró, Eszter; Topál, József; Miklósi, Ádám
2013-01-01
Robots offer new possibilities for investigating animal social behaviour. This method enhances the controllability and reproducibility of experimental techniques, and it also allows the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs' interactive behaviour in a problem-solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour, and moved along varied routes. The dogs showed shorter looking and touching durations but increased gaze alternation toward the Mechanical Human compared with the Mechanical UMO. This suggests that dogs' interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared with the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and recognise some social aspects of UMOs' behaviour. This is the first evidence that the interactive behaviour of a robot is important for evoking dogs' social responsiveness.
Experientially guided robots [for planet exploration]
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1974-01-01
This paper argues that an experientially guided robot is necessary to successfully explore far-away planets. Such a robot is characterized as having sense organs which receive sensory information from its environment and motor systems which allow it to interact with that environment. The sensori-motor information it receives is organized into an experiential knowledge structure, and this knowledge in turn is used to guide the robot's future actions. A summary is presented of a problem-solving system which is being used as a test bed for developing such a robot. The robot currently engages in the behaviors of visual tracking, focusing down, and looking around in a simulated Martian landscape. Finally, some unsolved problems are outlined whose solutions are necessary before an experientially guided robot can be produced. These problems center around organizing the motivational and memory structure of the robot and understanding its high-level control mechanisms.
NASA Astrophysics Data System (ADS)
Billard, Aude
2000-10-01
This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all the experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers and for modeling neurological processes. I present: 1) DRAMA, a novel connectionist architecture with general properties for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learn the topography of an environment whose features change frequently; 4) an abstract computational model of the primate ability to learn by imitation; and 5) a model for the control of locomotor gaits in a quadruped legged robot.
Using virtual robot-mediated play activities to assess cognitive skills.
Encarnação, Pedro; Alvarez, Liliana; Rios, Adriana; Maya, Catarina; Adams, Kim; Cook, Al
2014-05-01
To evaluate the feasibility of using virtual robot-mediated play activities to assess cognitive skills. Children with and without disabilities utilized both a physical robot and a matching virtual robot to perform the same play activities. The activities were designed such that successfully performing them is an indication of understanding of the underlying cognitive skills. Participants' performance with both robots was similar when evaluated by the success rates in each of the activities. Session video analysis encompassing participants' behavioral, interaction and communication aspects revealed differences in sustained attention, visuospatial and temporal perception, and self-regulation, favoring the virtual robot. The study shows that virtual robots are a viable alternative to the use of physical robots for assessing children's cognitive skills, with the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support. Virtual robots can provide a vehicle for children to demonstrate cognitive understanding. Virtual and physical robots can be used as augmentative manipulation tools allowing children with disabilities to actively participate in play, educational and therapeutic activities. Virtual robots have the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support.
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path planning, there are times when the human does the path planning and, if satisfied, commits those paths to be executed by the robot. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette. A distinct color represents each objective, and tradeoffs among objectives are balanced in the way an artist mixes colors to obtain a desired shade. Thus, human intent is analogous to the artist's shade of color. We call the GUI an "Adverb Palette," where the word "Adverb" represents a specific type of objective for the path, such as the adverbs "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The novel interactive interface gives the user an opportunity to evaluate alternatives (that trade off between different objectives) by allowing her to visualize the instantaneous outcomes that result from her actions on the interface. In addition to assisting the analysis of solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby broadening the variety of plans considered. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem. Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
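The color-mixing metaphor described above maps naturally onto a convex combination of per-objective path costs, with each palette slider acting as a weight. The following is a minimal sketch of that idea, not the paper's implementation: the function names and the cost dictionaries are hypothetical.

```python
def blend_objectives(path_costs, weights):
    """Scalarize multiple path objectives (e.g. travel time for "quickly",
    risk for "safely") as a convex combination, mirroring how an Adverb
    Palette-style interface mixes objective 'colors'.
    path_costs: dict mapping objective name -> cost of this path
    weights:    dict mapping objective name -> non-negative slider value
    """
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one objective weight must be positive")
    return sum(weights[k] / total * path_costs[k] for k in path_costs)

def best_path(candidate_paths, weights):
    """Pick the candidate (a list of per-objective cost dicts, e.g. produced
    by an optimizer or by user-defined waypoints) with the lowest blend."""
    return min(candidate_paths, key=lambda costs: blend_objectives(costs, weights))
```

Moving a slider changes `weights` and immediately re-ranks the candidates, which is one simple way to realize the "instantaneous outcomes" feedback the interface provides.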