NASA Technical Reports Server (NTRS)
Wales, Roxana C.; Shalin, Valerie L.; Bass, Deborah S.
2004-01-01
This paper focuses on the development and use of abbreviated names, as well as an emergent ontology, associated with making requests for action of a distant robotic rover during the 2003-2004 NASA Mars Exploration Rover (MER) mission, run by the Jet Propulsion Laboratory. The infancy of the domain of Martian telerobotic science, in which specialists request work from a rover moving through the landscape, as well as the need to consider the interdisciplinary teams involved in the work, required an empirical approach. The formulation of this ontology is grounded in human behavior and work practice. The purpose of this paper is to identify general issues for an ontology of action (specifically for requests for action), while maintaining sensitivity to the users, tools and the work system within a specific technical domain. We found that this ontology of action must take into account a dynamic environment, changing in response to the movement of the rover and to changes on the rover itself, and must be responsive to the purposeful intent of the science requestors. Analysis of MER mission events demonstrates that the work practice and even robotic tool usage change over time. Therefore, an ontology must adapt to and represent both incremental change and revolutionary change, and the ontology can never be more than a partial agreement on the conceptualizations involved. Although examined in a rather unique technical domain, the general issues pertain to the control of any complex, distributed work system as well as the archival record of its accomplishments.
Building an environment model using depth information
NASA Technical Reports Server (NTRS)
Roth-Tabak, Y.; Jain, Ramesh
1989-01-01
Modeling the environment is one of the most crucial issues for the development and research of autonomous robots and tele-perception. Though the physical robot operates (navigates and performs various tasks) in the real world, any type of reasoning, such as situation assessment, planning or reasoning about action, is performed based on information in its internal world. Hence, the robot's intentional actions are inherently constrained by the models it has. These models may serve as interfaces between sensing modules and reasoning modules, or, in the case of telerobots, as interfaces between the human operator and the distant robot. A robot operating in a known restricted environment may have a priori knowledge of its whole possible work domain, which will be assimilated in its World Model. As the information in the World Model is relatively fixed, an Environment Model must be introduced to cope with changes in the environment and to allow exploration of entirely new domains. Introduced here is an algorithm that uses dense range data collected at various positions in the environment to refine, update, or generate a 3-D volumetric model of an environment. The model, which is intended for autonomous robot navigation and tele-perception, consists of cubic voxels with three possible attributes: Void, Full, and Unknown. Experimental results from simulations of range data in synthetic environments are given. The quality of the results shows great promise for dealing with noisy input data. Performance measures for the algorithm are defined, and quantitative results for noisy data and positional uncertainty are presented.
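The Void/Full/Unknown voxel scheme described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' algorithm: the sparse dictionary grid, the fixed ray-stepping rule, and the `integrate_ray` name are all assumptions.

```python
# Illustrative sketch only: sparse voxel map with the three attributes
# named in the abstract.
UNKNOWN, VOID, FULL = "unknown", "void", "full"

class EnvironmentModel:
    """Sparse 3-D voxel model updated from range measurements."""

    def __init__(self):
        self.grid = {}  # (x, y, z) -> attribute; absent voxels are UNKNOWN

    def state(self, voxel):
        return self.grid.get(voxel, UNKNOWN)

    def integrate_ray(self, sensor, hit, steps=50):
        """One range reading: space between the sensor and the hit becomes
        VOID; the voxel containing the hit point becomes FULL."""
        sx, sy, sz = sensor
        hx, hy, hz = hit
        for i in range(steps):  # walk toward, but not onto, the hit voxel
            t = i / steps
            v = (int(sx + t * (hx - sx)),
                 int(sy + t * (hy - sy)),
                 int(sz + t * (hz - sz)))
            self.grid[v] = VOID
        self.grid[(int(hx), int(hy), int(hz))] = FULL

model = EnvironmentModel()
model.integrate_ray(sensor=(0, 0, 0), hit=(10, 0, 0))
print(model.state((5, 0, 0)), model.state((10, 0, 0)))  # void full
```

Keeping Unknown distinct from Void is the point of the three-valued scheme: a planner can treat unexplored space differently from space that has actually been sensed as empty.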
Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.
Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel
2016-05-25
We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.
2017-04-20
This close-up shows Swarmie robots that were programmed with computer code by college and university students. During the Swarmathon competition at the Kennedy Space Center Visitor Complex, the small robots looked for "resources" in the form of cubes with AprilTags, similar to barcodes. Similar robots could help find resources when astronauts explore distant locations, such as the moon or Mars.
2017-04-19
At the Kennedy Space Center Visitor Complex, students monitor progress as their Swarmie robots search for "resources." The goal is for the robots to pick up cubes with AprilTags, which are similar to bar codes. The Swarmies then move the cubes to a white square in the center of the competition arena. The small, four-wheeled robots are designed to effectively and efficiently locate hidden resources while astronauts explore distant destinations such as the moon or Mars.
Vollmer, Anna-Lisa; Mühlig, Manuel; Steil, Jochen J; Pitsch, Karola; Fritsch, Jannik; Rohlfing, Katharina J; Wrede, Britta
2014-01-01
Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.
A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.
Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N
2015-01-01
A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
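The goal-inference step described above can be illustrated with a toy Bayesian computation. Everything here (goal names, action vocabulary, probabilities) is invented for illustration; it shows only the general mechanism of inverting learned action models with Bayes' rule, not the paper's implementation.

```python
# Hypothetical learned models P(observed action | goal), acquired by the
# robot through its own exploration. Values are invented.
action_model = {
    "stack_blocks": {"reach": 0.5, "grasp": 0.3, "place": 0.2},
    "clear_table":  {"reach": 0.2, "grasp": 0.3, "discard": 0.5},
}

def infer_goal(observed_actions, prior=None):
    """Posterior over goals given observed actions, via Bayes' rule."""
    goals = list(action_model)
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    posterior = {}
    for g in goals:
        likelihood = 1.0
        for a in observed_actions:
            likelihood *= action_model[g].get(a, 1e-6)  # smooth unseen actions
        posterior[g] = prior[g] * likelihood
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

post = infer_goal(["reach", "grasp", "place"])
print(max(post, key=post.get))  # stack_blocks
```

A robot could also threshold this posterior: when no goal is sufficiently probable, it asks the human for help, matching the collaboration scenario the abstract describes.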
NASA Astrophysics Data System (ADS)
Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory
2005-05-01
Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.
Children perseverate to a human's actions but not to a robot's actions.
Moriguchi, Yusuke; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji
2010-01-01
Previous research has shown that young children commit perseverative errors from their observation of another person's actions. The present study examined how social observation would lead children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one dimension (e.g. shape), after which they were asked to sort according to a different dimension (e.g. colour). The results showed that children's behaviours in the task were significantly influenced by the human model's actions but not by the robot's actions. Experiment 2 excluded the possibility that children's behaviours were unaffected by the robot's actions simply because they had not observed those actions. We concluded that children's perseverative errors from social observation resulted, in part, from their socio-cognitive ability.
System and method for controlling a vision guided robot assembly
Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.
2017-03-07
A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining, via a vision processing method, whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing execution of the vision processing method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision processing method.
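The claimed sequence of steps amounts to pipelining perception with motion: the vision check for the next part starts while the arm is still busy, and the measured deviation offsets the programmed action. A loose Python sketch of that control flow follows; the part records and the stand-in `vision_check` function are invented for illustration.

```python
# Loose sketch of the pipelined vision/motion flow; not the patented method.

def vision_check(part):
    """Stand-in for the vision processing step: (ready?, deviation)."""
    return part["ready"], part["deviation"]

def run_cycle(parts):
    performed = []
    for i, part in enumerate(parts):
        if i + 1 < len(parts):
            vision_check(parts[i + 1])  # overlap perception with arm motion
        ready, deviation = vision_check(part)
        if ready:
            # Offset the programmed action by the measured position deviation.
            performed.append((part["name"], deviation))
    return performed

parts = [{"name": "part_A", "ready": True,  "deviation": (0.4, -0.1)},
         {"name": "part_B", "ready": False, "deviation": (0.0, 0.0)}]
print(run_cycle(parts))  # only the ready part is acted on
```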
Robotics in urologic oncology.
Jain, Saurabh; Gautam, Gagan
2015-01-01
Robotic surgery was initially developed to overcome problems faced during conventional laparoscopic surgeries and to perform telesurgery at distant locations. It has now established itself as the epitome of minimally invasive surgery (MIS). It is one of the most significant advances in MIS in recent years and is considered by many as a revolutionary technology, capable of influencing the future of surgery. After its introduction to urology, robotic surgery has redefined the management of urological malignancies. It promises to make difficult urological surgeries easier, safer and more acceptable to both the surgeon and the patient. Robotic surgery is slowly, but surely establishing itself in India. In this article, we provide an overview of the advantages, disadvantages, current status, and future applications of robotic surgery for urologic cancers in the context of the Indian scenario.
Everyday robotic action: lessons from human action control
de Kleijn, Roy; Kachergis, George; Hommel, Bernhard
2014-01-01
Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations. PMID:24672474
Renal surgery in the new millennium.
Delvecchio, F C; Preminger, G M
2000-11-01
In the not too distant future, the minimally invasive renal surgeon will be able to practice an operation on a difficult case on a three-dimensional virtual reality simulator, providing all attributes of the real procedure. The patient's imaging studies will be imported into the simulator to better mimic particular anatomy. When confident enough of his or her skills, the surgeon will start operating on the patient using the same virtual reality simulator/telepresence surgery console system, which will permit the live surgery to be conducted by robots hundreds of miles away. The robots will manipulate miniature endoscopes or control minimally or noninvasive ablative technologies. Endoscopic/laparoscopic footage of the surgical procedure will be stored digitally in optical disks to be used later in telementoring of a surgery resident. All this and more will be possible in the not so distant third millennium.
A teleconference with three-dimensional surgical video presentation on the 'usual' Internet.
Obuchi, Toshiro; Moroga, Toshihiko; Nakamura, Hiroshige; Shima, Hiroji; Iwasaki, Akinori
2015-03-01
Endoscopic surgery employing three-dimensional (3D) video images, such as a robotic surgery, has recently become common. However, the number of opportunities to watch such actual 3D videos is still limited due to many technical difficulties associated with showing 3D videos in front of an audience. A teleconference with 3D video presentations of robotic surgeries was held between our institution and a distant institution using a commercially available telecommunication appliance on the 'usual' Internet. Although purpose-built video displays and 3D glasses were necessary, no technical problems occurred during the presentation and discussion. This high-definition 3D telecommunication system can be applied to discussions about and education on 3D endoscopic surgeries for many surgeons, even in distant places, without difficulty over the usual Internet connection.
2017-04-20
In the Swarmathon competition at the Kennedy Space Center Visitor Complex, students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of cubes with AprilTags, similar to barcodes. Teams developed search algorithms for innovative robots known as "Swarmies" to operate autonomously, communicating and interacting as a collective swarm similar to ants foraging for food. In the spaceport's second annual Swarmathon, 20 teams representing 22 minority-serving universities and community colleges were invited to participate. Similar robots could help find resources when astronauts explore distant locations, such as the moon or Mars.
2006-06-01
…Army Research Laboratory (ARL) to develop methodologies to evaluate robotic behavior algorithms that control the actions of individual robots or groups of robots acting as a team to perform a mission.
Learning robot actions based on self-organising language memory.
Wermter, Stefan; Elshaw, Mark
2003-01-01
In the MirrorBot project we examine perceptual processes using models of cortical assemblies and mirror neurons to explore the emergence of semantic representations of actions, percepts and concepts in a neural robot. The hypothesis under investigation is whether a neural model will produce a life-like perception system for actions. In this context we focus in this paper on how instructions for actions can be modeled in a self-organising memory. Current approaches for robot control often do not use language and ignore neural learning. However, our approach uses language instruction and draws from the concepts of regional distributed modularity, self-organisation and neural assemblies. We describe a self-organising model that clusters actions into different locations depending on the body part they are associated with. In particular, we use actual sensor readings from the MIRA robot to represent semantic features of the action verbs. Furthermore, we outline a hierarchical computational model for a self-organising robot action control system using language for instruction.
Space-time modeling using environmental constraints in a mobile robot system
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1990-01-01
Grid-based models of a robot's local environment have been used by many researchers building mobile robot control systems. The attraction of grid-based models is their clear parallel between the internal model and the external world. However, the discrete nature of such representations does not match well with the continuous nature of actions and usually serves to limit the abilities of the robot. This work describes a spatial modeling system that extracts information from a grid-based representation to form a symbolic representation of the robot's local environment. The approach makes a separation between the representation provided by the sensing system and the representation used by the action system. Separation allows asynchronous operation between sensing and action in a mobile robot, as well as the generation of a more continuous representation upon which to base actions.
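One common way to derive a symbolic description from a grid-based model is to group free cells into connected regions that can then be named and reasoned about. The sketch below, a plain flood fill over an invented toy grid, illustrates that idea; it is not the paper's algorithm.

```python
# Flood-fill sketch: turn a grid of free/occupied cells into a small set
# of symbolic "region" labels. Grid contents are invented.

def free_regions(grid):
    """Label 4-connected regions of free cells (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    labels, regions = {}, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and (r, c) not in labels:
                regions += 1
                stack = [(r, c)]  # grow one region from this seed
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and grid[y][x] == 0 and (y, x) not in labels):
                        labels[(y, x)] = regions
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return regions, labels

grid = [[0, 0, 1],
        [1, 1, 1],
        [1, 0, 0]]
n, labels = free_regions(grid)
print(n)  # two disconnected free-space regions
```

The symbolic regions can then be updated asynchronously from the sensing process, which is the separation between sensing and action representations that the abstract argues for.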
2008-06-12
…becoming a reality (Edwards 2005, 30). In theory, the new sensory systems would acquire electrical signatures emitted from distant communication… rationalization regarding the increased use of technology that may be employed during war. According to Hinman, the Ethics of Duty theory and the Utilitarianism theory provide the theoretical framework that best describes how the current Law of War and philosophy of ethics define the virtue of…
New Opportunities for Outer Solar System Science using Radioisotope Electric Propulsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert J.; /SLAC; Amini, Rashied
Today, our questions and hypotheses about the Solar System's origin have surpassed our ability to deliver scientific instruments to deep space. The moons of the outer planets, the Trojan and Centaur minor planets, the trans-Neptunian objects (TNO), and distant Kuiper Belt objects (KBO) hold a wealth of information about the primordial conditions that led to the formation of our Solar System. Robotic missions to these objects are needed to make the discoveries, but the lack of deep-space propulsion is impeding this science. Radioisotope electric propulsion (REP) will revolutionize the way we do deep-space planetary science with robotic vehicles, giving them unprecedented mobility. Radioisotope electric generators and lightweight ion thrusters are being developed today which will make possible REP systems with specific power in the range of 5 to 10 W/kg. Studies have shown that this specific power range is sufficient to perform fast rendezvous missions from Earth to the outer Solar System and fast sample return missions. This white paper discusses how the mobility provided by REP opens up entirely new science opportunities for robotic missions to distant primitive bodies. We also give an overview of REP technology developments and the required next steps to realize REP.
Modeling Mixed Groups of Humans and Robots with Reflexive Game Theory
NASA Astrophysics Data System (ADS)
Tarasenko, Sergey
The Reflexive Game Theory is based on decision-making principles similar to the ones used by humans. This theory considers groups of subjects and allows one to predict which action from a given set each subject in the group will choose. It is possible to influence a subject's decision so that he will make a particular choice. The purpose of this study is to illustrate how robots can deter humans from risky actions. To determine the risky actions, Asimov's Three Laws of Robotics are employed. By fusing the RGT's power to convince humans on the mental level with the safety of Asimov's Laws, we illustrate how robots in mixed groups of humans and robots can influence human subjects in order to deter them from risky actions. We suggest that this fusion has the potential to produce robots with human-like motor behavior and appearance driven by human-like decision-making algorithms.
A Policy Representation Using Weighted Multiple Normal Distribution
NASA Astrophysics Data System (ADS)
Kimura, Hajime; Aramaki, Takeshi; Kobayashi, Shigenobu
In this paper, we attempt to solve a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can stand up to the trial and error. On this robot, incomplete perception problems are caused by noisy sensors and cheap position-control motor systems. This incomplete perception also causes the optimum actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method and propose a new hierarchical policy representation scheme that consists of discrete action selection on the top level and continuous action selection on the low level of the hierarchy. The proposed hierarchical scheme accelerates learning on continuous action spaces, and it can pursue the optimum actions as they vary with the progress of learning on our robotics problem. This paper compares and discusses several learning algorithms through simulations, and demonstrates the proposed method in an application to the real robot.
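The policy representation named in the title, a weighted set of normal distributions, can be sketched as a two-level sampler: a discrete, weight-proportional choice of component at the top, then a continuous Gaussian draw at the bottom. The component weights, means, and deviations below are invented for illustration.

```python
import random

# Two-level sampler sketch: discrete component selection (top level),
# then a continuous Gaussian action draw (low level). Parameters invented.

class WeightedNormalPolicy:
    def __init__(self, components):
        self.components = components  # list of (weight, mean, stddev)

    def sample(self, rng=random):
        total = sum(w for w, _, _ in self.components)
        r = rng.uniform(0.0, total)  # discrete top-level selection
        for w, mu, sigma in self.components:
            if r <= w:
                return rng.gauss(mu, sigma)  # continuous low-level action
            r -= w
        _, mu, sigma = self.components[-1]  # guard against float edge cases
        return rng.gauss(mu, sigma)

policy = WeightedNormalPolicy([(0.7, 0.2, 0.05), (0.3, -0.4, 0.1)])
actions = [policy.sample() for _ in range(5)]
print(all(isinstance(a, float) for a in actions))
```

In an actor-critic setting along the lines the abstract describes, the weights and Gaussian parameters would be the actor's adjustable quantities, updated from the critic's evaluation signal.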
NASA Technical Reports Server (NTRS)
Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.
1990-01-01
Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in the other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.
NASA Technical Reports Server (NTRS)
Watzin, James G.; Burt, Joseph; Tooley, Craig
2004-01-01
The Vision for Space Exploration calls for undertaking lunar exploration activities to enable sustained human and robotic exploration of Mars and beyond, including more distant destinations in the solar system. In support of this vision, the Robotic Lunar Exploration Program (RLEP) is expected to execute a series of robotic missions to the Moon, starting in 2008, in order to pave the way for further human space exploration. This paper will give an introduction to the RLEP program office, its role and its goals, and the approach it is taking to executing the charter of the program. The paper will also discuss candidate architectures that are being studied as a framework for defining the RLEP missions and the context in which they will evolve.
RoboJockey: Designing an Entertainment Experience with Robots.
Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo
2016-01-01
The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.
Human-Robot Cooperation with Commands Embedded in Actions
NASA Astrophysics Data System (ADS)
Kobayashi, Kazuki; Yamada, Seiji
In this paper, we first propose a novel interaction model, CEA (Commands Embedded in Actions). It explains how some existing systems reduce the workload of their users. We then extend the CEA into the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ the ACS (Action Coding System), which describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits the CEA's strong point: it enables a user to send a command to a robot through his/her natural action for the task. The instance of the ECEA derived using the ACS is a temporal extension in which the user maintains the final state of a previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks up the object. In addition, we measure users' cognitive loads under the ECEA and a traditional method, DCM (Direct Commanding Method), in the sweeping task and compare them. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.
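The sweeping example can be caricatured in a few lines: the user's natural act of lifting an object is itself the command, and the robot's reactive rule responds to it. The world states and names below are invented, not the paper's system.

```python
# Caricature of CEA: a natural user action (lifting an object) doubles as
# the command; a simple reactive rule responds to it.

def robot_step(world):
    """Reactive rule: sweep wherever an object is currently held up."""
    return [f"sweep under_{obj}" for obj, state in world.items()
            if state == "lifted"]

world = {"vase": "on_floor", "chair": "lifted"}
print(robot_step(world))  # the lift doubles as a sweep command
```

No explicit command channel exists here, which is the workload reduction the model is meant to explain.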
Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony
2010-01-01
This paper presents a cognitive robotics model for the study of the embodied representation of action words. The present research shows how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503
Affordance Equivalences in Robotics: A Formalism
Andries, Mihai; Chavez-Garcia, Ricardo Omar; Chatila, Raja; Giusti, Alessandro; Gambardella, Luca Maria
2018-01-01
Automatic knowledge grounding is still an open problem in cognitive robotics. Recent research in developmental robotics suggests that a robot's interaction with its environment is a valuable source for collecting such knowledge about the effects of the robot's actions. A useful concept for this process is that of an affordance, defined as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. This paper proposes a formalism for defining and identifying affordance equivalence. By comparing the elements of two affordances, we can identify equivalences between affordances, and thus acquire grounded knowledge for the robot. This is useful when changes occur in the set of actions or objects available to the robot, allowing it to find alternative paths to reach goals. In the experimental validation phase, we verify whether the recorded interaction data are coherent with the identified affordance equivalences. This is done by querying a Bayesian Network that serves as a container for the collected interaction data, and verifying that both affordances considered equivalent yield the same effect with a high probability. PMID:29937724
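The affordance tuple and the equivalence test can be sketched directly. The interaction records, the frequency-count stand-in for the paper's Bayesian Network, and the 0.9 probability threshold are all illustrative assumptions.

```python
from collections import namedtuple, Counter

# Sketch of the formalism: an affordance relates an actor, an action, an
# object, and an effect. Records below are invented.
Affordance = namedtuple("Affordance", "actor action obj effect")

interactions = [
    Affordance("robot", "push", "box", "moved"),
    Affordance("robot", "push", "box", "moved"),
    Affordance("robot", "pull", "box", "moved"),
    Affordance("robot", "push", "ball", "rolled"),
]

def effect_distribution(action, obj):
    """Empirical P(effect | action, object) from recorded interactions."""
    effects = Counter(a.effect for a in interactions
                      if a.action == action and a.obj == obj)
    total = sum(effects.values())
    return {e: n / total for e, n in effects.items()}

def equivalent(action1, obj1, action2, obj2, threshold=0.9):
    """Equivalent if both yield the same dominant effect with high probability."""
    d1 = effect_distribution(action1, obj1)
    d2 = effect_distribution(action2, obj2)
    if not d1 or not d2:
        return False
    e1, e2 = max(d1, key=d1.get), max(d2, key=d2.get)
    return e1 == e2 and d1[e1] >= threshold and d2[e2] >= threshold

print(equivalent("push", "box", "pull", "box"))   # True: both reliably move it
print(equivalent("push", "box", "push", "ball"))  # False: different effects
```

An equivalence like the first one is what lets the robot substitute "pull" for "push" when one action becomes unavailable, the alternative-path use case the abstract mentions.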
2018-04-17
Students from Montgomery College in Rockville, Maryland, follow the progress of their Swarmie robots during the Swarmathon competition at the Kennedy Space Center Visitor Complex. Students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of AprilTag cubes, similar to barcodes. Teams developed search algorithms for the Swarmies to operate autonomously, communicating and interacting as a collective swarm, similar to ants foraging for food. In the spaceport's third annual Swarmathon, 23 teams representing 24 minority-serving universities and community colleges were invited to develop software code to operate these innovative robots, known as "Swarmies," to help find resources when astronauts explore distant locations such as the Moon or Mars.
Robot learning and error correction
NASA Technical Reports Server (NTRS)
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoid such errors in the future.
How to Build an Intentional Android: Infants' Imitation of a Robot's Goal-Directed Actions
ERIC Educational Resources Information Center
Itakura, Shoji; Ishida, Hiraku; Kanda, Takayuki; Shimada, Yohko; Ishiguro, Hiroshi; Lee, Kang
2008-01-01
This study examined whether young children are able to imitate a robot's goal-directed actions. Children (24-35 months old) viewed videos showing a robot attempting to manipulate an object (e.g., putting beads inside a cup) but failing to achieve its goal (e.g., beads fell outside the cup). In 1 video, the robot made eye contact with a human…
Controlling mechanisms over the internet
NASA Astrophysics Data System (ADS)
Lumia, Ronald
1997-01-01
The internet, widely available throughout the world, can be used to control robots, machine tools, and other mechanisms. This paper describes a low-cost virtual collaborative environment (VCE) that connects users with distant equipment. The system is based on PC technology and combines off-line programming with on-line execution. A remote user programs the system graphically and simulates the motions and actions of the mechanism until satisfied with the functionality of the program. The program is then transferred from the remote site to the local site where the real equipment exists. At the local site, the simulation is run again to check the program from a safety standpoint. Then, the local user runs the program on the real equipment. During execution, a camera in the real workspace provides an image back to the remote user through a teleconferencing system. The system costs approximately 12,500 dollars and represents a low-cost alternative to the Sandia National Laboratories VCE.
Mendez, Ivar; Jong, Michael; Keays-White, Debra; Turner, Gail
2013-01-01
Objective To evaluate the feasibility of remote presence for improving the health of residents in a remote northern Inuit community. Study design A pilot study assessed patient, nurse and physician satisfaction with, and the use of, remote presence technology aiding the delivery of health care to a remote community. A preliminary cost analysis of this technology was also performed. Methods This study deployed a remote presence RP-7 robot to the isolated Inuit community of Nain, Newfoundland and Labrador, for 15 months. The RP-7 is wirelessly controlled by a laptop computer equipped with audiovisual capability and a joystick to maneuver the robot in real time, aiding in the assessment and care of patients from a distant location. Qualitative data on physician, patient, caregiver and staff satisfaction were collected, as well as information on the system's use and characteristics and on the number of air transports to the referral center and their associated costs. Results A total of 252 remote presence sessions occurred during the study period, with 89% of the sessions involving direct patient assessment or monitoring. Air transport was required in only 40% of the cases that would otherwise have been transported. Patients and their caregivers, nurses and physicians all expressed a high level of satisfaction with the remote presence technology and deemed it beneficial for patient care, workloads and job satisfaction. Conclusions These results show the feasibility of deploying a remote presence robot in a distant northern community and a high degree of satisfaction with the technology. Remote presence in the Canadian North has the potential to deliver a cost-effective health care solution to underserviced communities, reducing the need to transport patients and caregivers to distant referral centers. PMID:23984292
When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions
ERIC Educational Resources Information Center
Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman
2012-01-01
In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…
NASA Technical Reports Server (NTRS)
Morring, Frank, Jr.
2005-01-01
Engineers and interns at this NASA field center are building the prototype of a robotic rover that could go where no wheeled rover has gone before, into the dark, cold craters at the lunar poles and across the Moon's rugged highlands, like a walking tetrahedron. With NASA pushing to meet President Bush's new exploration objectives, the robots taking shape here today could be on the Moon in a decade. In the longer term, the concept could lead to shape-shifting robot swarms designed to explore distant planetary surfaces in advance of humans. "If you look at all of NASA's projections of the future, anyone's projections of the space program, they're all rigid-body architecture," says Steven Curtis, principal investigator on the effort. "This is not rigid-body. The whole key here is flexibility and reconfigurability with a capital R."
A neural network-based exploratory learning and motor planning system for co-robots
Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano
2015-01-01
Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640
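The "learning by doing" loop can be sketched in a few lines: babble random motor commands, remember each command's sensed outcome, then invert that memory by nearest-neighbor lookup when a goal is given. The two-wheel command format and the stand-in forward model below are assumptions for illustration, not the Calliope's actual kinematics or the paper's neural network.

```python
import random

def forward(command):
    """Stand-in plant: a (left, right) motor command -> sensed outcome."""
    left, right = command
    return (left + right, right - left)

def babble(n, rng):
    """Motor babbling: try random commands, record (command, outcome) pairs."""
    experience = []
    for _ in range(n):
        cmd = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        experience.append((cmd, forward(cmd)))
    return experience

def choose_command(experience, goal):
    """Inverse model: the command whose remembered outcome was nearest the goal."""
    return min(experience,
               key=lambda ce: sum((a - b) ** 2 for a, b in zip(ce[1], goal)))[0]

rng = random.Random(0)
experience = babble(500, rng)
cmd = choose_command(experience, goal=(1.0, 0.0))
reached = forward(cmd)
```

With 500 babbled samples, the selected command lands close to the goal without the robot ever being told the plant's equations, which is the essence of the exploratory-learning approach.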
Integrated Planning for Telepresence With Time Delays
NASA Technical Reports Server (NTRS)
Johnston, Mark; Rabe, Kenneth
2009-01-01
A conceptual "intelligent assistant" and an artificial-intelligence computer program that implements the intelligent assistant have been developed to improve control exerted by a human supervisor over a robot that is so distant that communication between the human and the robot involves significant signal-propagation delays. The goal of the effort is not only to help the human supervisor monitor and control the state of the robot, but also to improve the efficiency of the robot by allowing the supervisor to "work ahead". The intelligent assistant is an integrated combination of an artificial-intelligence planner and a monitor of states of both the human supervisor and the remote robot. The novelty of the system lies in the way it uses the planner to reason about the states at both ends of the time delay. The purpose served by the assistant is to provide advice to the human supervisor about current and future activities, derived from a sequence of high-level goals to be achieved.
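The work-ahead idea can be illustrated with a toy model (an assumption for illustration, not the JPL assistant's design): instead of planning against the last state the robot has reported, the supervisor plans against the state predicted once all in-flight commands have taken effect, which avoids overshooting the goal while waiting out the delay.

```python
from collections import deque

def predict(state, queued_commands):
    """Apply every in-flight command to the last known state."""
    for c in queued_commands:
        state = state + c
    return state

class Supervisor:
    """Plans ahead of the delay by reasoning about the predicted remote state."""
    def __init__(self, known_state=0):
        self.known_state = known_state   # last state reported by the robot
        self.in_flight = deque()         # commands sent but not yet confirmed

    def plan_next(self, goal, step=1):
        ahead = predict(self.known_state, self.in_flight)
        cmd = step if ahead < goal else (-step if ahead > goal else 0)
        self.in_flight.append(cmd)
        return cmd

# With a goal 3 units away, the supervisor issues exactly three unit moves
# and then idles, even though no feedback has arrived yet.
sup = Supervisor()
cmds = [sup.plan_next(goal=3) for _ in range(5)]
```

A supervisor that planned only against the stale known state would keep issuing moves for the entire delay and overshoot; reasoning about both ends of the delay is what makes working ahead safe.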
Types of verbal interaction with instructable robots
NASA Technical Reports Server (NTRS)
Crangle, C.; Suppes, P.; Michalowski, S.
1987-01-01
An instructable robot is one that accepts instruction in some natural language such as English and uses that instruction to extend its basic repertoire of actions. Such robots are quite different in conception from autonomously intelligent robots, which provide the impetus for much of the research on inference and planning in artificial intelligence. Examined here are the significant problem areas in the design of robots that learn from verbal instruction. Examples are drawn primarily from our earlier work on instructable robots and recent work on the Robotic Aid for the physically disabled. Natural-language understanding by machines is discussed, as are the possibilities and limits of verbal instruction. The core problem of verbal instruction, namely, how to achieve specific concrete action in the robot in response to commands that express general intentions, is considered, as are two major challenges to instructability: achieving appropriate real-time behavior in the robot, and extending the robot's language capabilities.
Action and language integration: from humans to cognitive robots.
Borghi, Anna M; Cangelosi, Angelo
2014-07-01
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means of helping researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration of action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.
Evaluation of parallel reduction strategies for fusion of sensory information from a robot team
NASA Astrophysics Data System (ADS)
Lyons, Damian M.; Leroy, Joseph
2015-05-01
The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: even with mutual views removed, the stereo data from a team of robots can quickly swamp a WiFi connection. This paper proposes and evaluates a communication and fusion approach based on the parallel reduction operation, where data is combined in a series of steps over increasingly large subsets of the team. Eight different strategies for selecting the subsets are evaluated for bandwidth requirements using three robot missions, each carried out with teams of four Pioneer 3-AT robots. Our results indicate that selecting groups to combine based on similar pose but distant location yields the best results.
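The reduction scheme can be sketched as a merge tree. The naive adjacent pairing below is an assumption for illustration (the paper evaluates eight subset-selection strategies, favoring similar pose but distant location): n partial models are fused in about log2(n) rounds rather than n-1 sequential transfers to a single hub.

```python
def merge(cloud_a, cloud_b):
    """Fuse two point-cloud fragments (here simply a set union of 3-D points)."""
    return cloud_a | cloud_b

def parallel_reduce(clouds):
    """Tree-structured reduction: pairwise merges until one model remains."""
    rounds = 0
    while len(clouds) > 1:
        nxt = [merge(clouds[i], clouds[i + 1]) if i + 1 < len(clouds)
               else clouds[i]
               for i in range(0, len(clouds), 2)]
        clouds, rounds = nxt, rounds + 1
    return clouds[0], rounds

# Four robots, each holding a fragment of the map.
fragments = [{(0, 0, 0)}, {(1, 0, 0)}, {(0, 1, 0)}, {(0, 0, 1)}]
fused, rounds = parallel_reduce(fragments)
```

For a four-robot team this completes in two rounds, and because the merges within a round can run concurrently on different robot pairs, no single WiFi link has to carry every fragment at once.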
Deactivation in the Sensorimotor Area during Observation of a Human Agent Performing Robotic Actions
ERIC Educational Resources Information Center
Shimada, Sotaro
2010-01-01
It is well established that several motor areas, called the mirror-neuron system (MNS), are activated when an individual observes other's actions. However, whether the MNS responds similarly to robotic actions compared with human actions is still controversial. The present study investigated whether and how the motor area activity is influenced by…
EEG theta and Mu oscillations during perception of human and robot actions
Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.
2013-01-01
The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375
Object Transportation by Two Mobile Robots with Hand Carts
Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun
2014-01-01
This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50–60%. In addition, we demonstrated the efficacy of this task in real environments where errors occur in robot sensing and movement. PMID:27433499
Toolkits Control Motion of Complex Robotics
NASA Technical Reports Server (NTRS)
2010-01-01
That space is a hazardous environment for humans is common knowledge. Even beyond the obvious lack of air and gravity, the extreme temperatures and exposure to radiation make the human exploration of space a complicated and risky endeavor. The conditions of space and the space suits required to conduct extravehicular activities add layers of difficulty and danger even to tasks that would be simple on Earth (tightening a bolt, for example). For these reasons, the ability to scout distant celestial bodies and perform maintenance and construction in space without direct human involvement offers significant appeal. NASA has repeatedly turned to complex robotics for solutions to extend human presence deep into space at reduced risk and cost and to enhance space operations in low Earth orbit. At Johnson Space Center, engineers explore the potential applications of dexterous robots capable of performing tasks like those of an astronaut during extravehicular activities and even additional ones too delicate or dangerous for human participation. Johnson's Dexterous Robotics Laboratory experiments with a wide spectrum of robot manipulators, such as the Mitsubishi PA-10 and the Robotics Research K-1207i robotic arms. To simplify and enhance the use of these robotic systems, Johnson researchers sought generic control methods that could work effectively across every system.
Language for action: Motor resonance during the processing of human and robotic voices.
Di Cesare, G; Errante, A; Marchi, M; Cuccio, V
2017-11-01
In this fMRI study we evaluated whether the auditory processing of action verbs pronounced by a human or a robotic voice in the imperative mood differently modulates the activation of the mirror neuron system (MNS). The study produced three results. First, the activation pattern found during listening to action verbs was very similar in both the robot and human conditions. Second, the processing of action verbs compared to abstract verbs determined the activation of the fronto-parietal circuit classically involved in action goal understanding. Third, and most importantly, listening to action verbs compared to abstract verbs produced activation of the anterior part of the supramarginal gyrus (aSMG) regardless of the condition (human or robot) and in the absence of any object name. The supramarginal gyrus is a region considered to underpin hand-object interaction and associated with the processing of affordances. These results suggest that listening to action verbs may trigger the recruitment of motor representations characterizing affordances and action execution, consistent with the predictive nature of motor simulation, which not only allows us to re-enact motor knowledge to understand others' actions but also prepares us for the actions we might need to carry out. Copyright © 2017 Elsevier Inc. All rights reserved.
Robot Acquisition of Active Maps Through Teleoperation and Vector Space Analysis
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2003-01-01
The work performed under this contract was in the area of intelligent robotics. The problem being studied was the acquisition of intelligent behaviors by a robot. The method was to acquire action maps that describe tasks as sequences of reflexive behaviors. Action maps (a.k.a. topological maps) are graphs whose nodes represent sensorimotor states and whose edges represent the motor actions that cause the robot to proceed from one state to the next. The maps were acquired by the robot after being teleoperated or otherwise guided by a person through a task several times. During a guided task, the robot records all its sensorimotor signals. The signals from several task trials are partitioned into episodes of static behavior. The corresponding episodes from each trial are averaged to produce a task description as a sequence of characteristic episodes. The sensorimotor states that indicate episode boundaries become the nodes, and the static behaviors, the edges. It was demonstrated that if compound maps are constructed from a set of tasks then the robot can perform new tasks in which it was never explicitly trained.
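The map-construction step can be sketched as follows, with episode detection simplified to "the motor command changed" (the contract work derives episode boundaries from the full sensorimotor record and averages corresponding episodes across trials; the state and behavior names below are invented):

```python
def segment_episodes(trial):
    """Split a (state, command) sequence wherever the command changes."""
    episodes, start = [], 0
    for i in range(1, len(trial)):
        if trial[i][1] != trial[i - 1][1]:
            episodes.append((trial[start][0], trial[i][0], trial[start][1]))
            start = i
    episodes.append((trial[start][0], trial[-1][0], trial[start][1]))
    return episodes

def build_action_map(trials):
    """Graph: node = episode-boundary state, edge = behavior to the next node."""
    graph = {}
    for trial in trials:
        for begin, end, behavior in segment_episodes(trial):
            graph.setdefault(begin, set()).add((behavior, end))
    return graph

# Two guided trials of a "drive to the door, then reach" task.
trial = [("at_start", "drive"), ("near_door", "drive"),
         ("near_door", "reach"), ("holding_handle", "reach")]
action_map = build_action_map([trial, trial])
```

Because nodes are shared across trials, maps from different tasks can be composed, which is what lets the robot chain learned behaviors into task sequences it was never explicitly trained on.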
Dynamic photogrammetric calibration of industrial robots
NASA Astrophysics Data System (ADS)
Maas, Hans-Gerd
1997-07-01
Today's developments in industrial robots focus on aims like gains in flexibility, improvement of the interaction between robots, and reduction of down-times. A very important method for achieving these goals is off-line programming. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This places high requirements on the geometric accuracy of a robot. While the repeatability of robot poses in the teach-in mode is often better than 0.1 mm, the absolute pose accuracy of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear, load, temperature and insufficient knowledge of model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing the kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economical technique with realtime potential, offers itself for this purpose. The paper analyzes the requirements posed on a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration in the past, a photogrammetric robot calibration system based on off-the-shelf low-cost hardware components is presented and results of pilot studies are discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, realtime capabilities are discussed. In the pilot studies, standard deviations of 0.05 - 0.25 mm in the three coordinate directions were achieved over a robot work range of 1.7 x 1.5 x 1.0 m3. The realtime capabilities of the technique make it possible to go beyond kinematic robot calibration and perform dynamic robot calibration, as well as photogrammetric on-line control of a robot in action.
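The calibration step, fitting model parameters to externally measured poses, can be sketched with a two-link planar arm standing in for a six-axis industrial robot (the photogrammetric system supplies the reference positions; all dimensions below are invented). Because the end-effector position is linear in the unknown link lengths, the fit reduces to a small least-squares problem:

```python
import math

def design_row(a1, a2):
    """Coefficients of (l1, l2) in the x- and y-equations for joint angles a1, a2."""
    return (math.cos(a1), math.cos(a1 + a2)), (math.sin(a1), math.sin(a1 + a2))

def calibrate(poses):
    """Solve the 2x2 normal equations for the link lengths (l1, l2)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for (a1, a2), (x, y) in poses:
        for (c1, c2), m in zip(design_row(a1, a2), (x, y)):
            s11 += c1 * c1; s12 += c1 * c2; s22 += c2 * c2
            b1 += c1 * m; b2 += c2 * m
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def fk(l1, l2, a1, a2):
    """Forward kinematics of the two-link arm, used to simulate measurements."""
    return (l1 * math.cos(a1) + l2 * math.cos(a1 + a2),
            l1 * math.sin(a1) + l2 * math.sin(a1 + a2))

# Simulated photogrammetric measurements of an arm with true lengths 0.7 m, 0.5 m.
angles = [(0.1, 0.5), (0.8, -0.3), (1.2, 0.9), (-0.4, 1.1)]
poses = [((a1, a2), fk(0.7, 0.5, a1, a2)) for a1, a2 in angles]
l1, l2 = calibrate(poses)
```

With noise-free measurements the fit recovers the true lengths exactly; with real photogrammetric data the same normal-equations machinery averages out measurement noise, and a full calibration would add joint offsets, elasticities and other parameters to the model.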
Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo
2011-01-01
This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and the restoration of motor function in patients with upper limb sensorimotor impairment, through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through the active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the performed action, obtained through an interpretation of the user's current gaze (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Ososky, Scott; Sanders, Tracy; Jentsch, Florian; Hancock, Peter; Chen, Jessie Y. C.
2014-06-01
Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even less than 100% reliable systems can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design, for maintaining humans' trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans' impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models regarding robots impact a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.
Schack, Thomas; Ritter, Helge
2009-01-01
This paper examines the cognitive architecture of human action, showing how it is organized over several levels and how it is built up. Basic action concepts (BACs) are identified as major building blocks on a representation level. These BACs are cognitive tools for mastering the functional demands of movement tasks. Results from different lines of research showed that not only the structure formation of mental representations in long-term memory but also chunk formation in working memory are built up on BACs and relate systematically to movement structures. It is concluded that such movement representations might provide the basis for action implementation and action control in skilled voluntary movements in the form of cognitive reference structures. To simulate action implementation we discuss challenges and issues that arise when we try to replicate complex movement abilities in robots. Among the key issues to be addressed is the question how structured representations can arise during skill acquisition and how the underlying processes can be understood sufficiently succinctly to replicate them on robot platforms. Working towards this goal, we translate our findings in studies of motor control in humans into models that can guide the implementation of cognitive robot architectures. Focusing on the issue of manual action control, we illustrate some results in the context of grasping with a five-fingered anthropomorphic robot hand.
2017-04-19
In the Swarmathon competition at the Kennedy Space Center Visitor Complex, students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of AprilTag cubes, similar to barcodes. Teams developed search algorithms for the Swarmies to operate autonomously, communicating and interacting as a collective swarm similar to ants foraging for food. In the spaceport's second annual Swarmathon, 20 teams representing 22 minority serving universities and community colleges were invited to develop software code to operate these innovative robots known as "Swarmies" to help find resources when astronauts explore distant locations, such as the moon or Mars.
2018-04-18
In the Swarmathon competition at the Kennedy Space Center Visitor Complex, students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of AprilTag cubes, similar to barcodes. Teams developed search algorithms for the Swarmies to operate autonomously, communicating and interacting as a collective swarm similar to ants foraging for food. In the spaceport's third annual Swarmathon, 23 teams representing 24 minority-serving universities and community colleges were invited to develop software code to operate these innovative robots known as "Swarmies" to help find resources when astronauts explore distant locations, such as the Moon or Mars.
On the development of a reactive sensor-based robotic system
NASA Technical Reports Server (NTRS)
Hexmoor, Henry H.; Underwood, William E., Jr.
1989-01-01
Flexible robotic systems for space applications need to use local information to guide their action in uncertain environments where the state of the environment and even the goals may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modified while maintaining strategic goals. Furthermore, reactive robotic systems need to have a broader view of their environments than sensory-based systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for a dynamic description of the environment and the determination of purposive and timely action. Applications of this scheme for assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.
Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford
2014-01-01
One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
2018-01-01
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888
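The abstract gives no equations for PIL, but the two aspects it names can be illustrated with a toy count-based association table: the incremental aspect learns from each confirmed gesture-action pair on the fly, and the proactive aspect acts without waiting once the estimated probability clears a threshold. All names and the threshold value are hypothetical; this is a sketch of the idea, not the authors' method.

```python
from collections import defaultdict

class GestureActionLearner:
    """Toy incremental gesture-to-action association table.

    update() implements the incremental aspect (learn on the fly);
    predict() implements the proactive aspect (act without waiting
    once the estimated probability clears a threshold).
    """

    def __init__(self, threshold=0.8):
        # gesture -> action -> observation count
        self.counts = defaultdict(lambda: defaultdict(int))
        self.threshold = threshold

    def update(self, gesture, confirmed_action):
        self.counts[gesture][confirmed_action] += 1

    def predict(self, gesture):
        actions = self.counts[gesture]
        total = sum(actions.values())
        if total == 0:
            return None  # unseen gesture: fall back to waiting for an instruction
        action, n = max(actions.items(), key=lambda kv: kv[1])
        return action if n / total >= self.threshold else None

learner = GestureActionLearner()
for _ in range(4):
    learner.update("point_at_part", "hand_over_part")
learner.update("point_at_part", "hold_plate")
print(learner.predict("point_at_part"))  # 4/5 = 0.8 clears the threshold
```

Returning `None` for low-confidence gestures is what lets the same loop degrade gracefully into the reactive, wait-for-instruction behavior the study compares against.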
Introducing Robotics at the Undergraduate Level.
ERIC Educational Resources Information Center
Thangiah, Sam R.; Joshi, Sharad W.
1997-01-01
Outlines how a course in robotics can be taught at the undergraduate level with specific experiments that can be used for incremental learning in programming a mobile robot or by simulating the actions of a robot. Contains 14 references. (Author/ASK)
Investigating the ability to read others' intentions using humanoid robots.
Sciutti, Alessandra; Ansuini, Caterina; Becchio, Cristina; Sandini, Giulio
2015-01-01
The ability to interact with other people hinges crucially on the possibility to anticipate how their actions will unfold. Recent evidence suggests that a similar skill may be grounded in the fact that we perform an action differently when different intentions drive it. Human observers can detect these differences and use them to predict the intention behind the action. Although intention reading from movement observation is receiving growing interest in research, the currently applied experimental paradigms have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing the ideal trade-off between controllability and naturalness of the interactive scenario. Robots can indeed establish an interaction in a controlled manner, while sharing the same action space and exhibiting contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.
Perception-action map learning in controlled multiscroll systems applied to robot navigation.
Arena, Paolo; De Fiore, Sebastiano; Fortuna, Luigi; Patané, Luca
2008-12-01
In this paper a new technique for action-oriented perception in robots is presented. The paper starts from exploiting the successful implementation of the basic idea that perceptual states can be embedded into chaotic attractors whose dynamical evolution can be associated with sensorial stimuli. In this way, it is possible to encode environment-dependent patterns into the chaotic dynamics. These have to be suitably linked to an action, executed by the robot, to fulfill an assigned mission. This task is addressed here: the action-oriented perception loop is closed by introducing a simple unsupervised learning stage, implemented via a bio-inspired structure based on the motor map paradigm. In this way, perceptual meanings, useful for solving a given task, can be autonomously learned, based on the environment-dependent patterns embedded into the controlled chaotic dynamics. The presented framework has been tested on a simulated robot and its performance has been successfully compared with other traditional navigation control paradigms. Moreover, an implementation of the proposed architecture on a Field Programmable Gate Array is briefly outlined and preliminary experimental results on a roving robot are also reported.
Economics of fabricating plastic preforms by robotics
NASA Astrophysics Data System (ADS)
Lundgren, E. M.
1985-08-01
A robotic work cell consisting of a process robot, an automatic weigh feeder, and an existing plastic pill making machine was developed. This work cell was released September 13, 1983, for production use. Although the work cell was designed and planned for operation in an operator-unattended mode, renovation and rearrangement of the work area made it necessary to assemble the work cell in the Robot Application Center Annex and to implement its initial use in production as an operator-attended work cell. Because the work cell is located in an area distant from the normal work area, an operator cannot monitor this and other equipment conveniently. As of September 1, 1984, the plastic pill making robot work cell has produced 80,428 pills in 752.8 hours, a reduction of 683.4 hours from the 1436.2 hours manual operation would have required. The next step in the development of automated pill making will occur when the work cell is relocated into the production department with a new pill press. Projections for future savings of $20,866 annually are based on a reduction of 1448 labor hours.
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million NASA grant, on a proposal submitted by a team from the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME,AE,&M). This report is an examination of the activities that are centered at CIRSSE.
A tele-operated mobile ultrasound scanner using a light-weight robot.
Delgorge, Cécile; Courrèges, Fabien; Al Bassit, Lama; Novales, Cyril; Rosenberger, Christophe; Smith-Guerin, Natalie; Brù, Concepció; Gilabert, Rosa; Vannoni, Maurizio; Poisson, Gérard; Vieyres, Pierre
2005-03-01
This paper presents a new tele-operated robotic chain for real-time ultrasound image acquisition and medical diagnosis. This system has been developed in the frame of the Mobile Tele-Echography Using an Ultralight Robot European Project. A light-weight six degrees-of-freedom serial robot, with a remote center of motion, has been specially designed for this application. It holds and moves a real probe on a distant patient according to the expert's gesture, and permits image acquisition using a standard ultrasound device. The combination of the mechanical structure chosen for the robot and a dedicated control law, particularly near the singular configuration, allows good path following and accurate robotized gestures. The choice of compression techniques for image transmission enables a compromise between data rate and quality. These combined approaches, for robotics and image processing, enable the medical specialist to better control the remote ultrasound probe holder system and to receive stable, good-quality ultrasound images to make a diagnosis via any type of communication link, from terrestrial to satellite. Clinical tests have been performed since April 2003. They used either satellite or Integrated Services Digital Network lines with a theoretical bandwidth of 384 Kb/s. They showed the tele-echography system helped to identify 66% of lesions and 83% of symptomatic pathologies.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems in planning the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
Guarded Motion for Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Idaho National Laboratory (INL) has created codes that ensure that a robot will come to a stop at a precise, specified distance from any obstacle regardless of the robot's initial speed, its physical characteristics, and the responsiveness of the low-level motor control schema. This Guarded Motion for Mobile Robots system iteratively adjusts the robot's action in response to information about the robot's environment.
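The OSTI record does not disclose the INL control law itself. One standard way to obtain the guaranteed-stop property it describes, independent of initial speed, is to cap the commanded speed by the braking distance remaining before the guard boundary, using the kinematic relation v² = 2ad. The sketch below illustrates that idea with hypothetical parameter names; it is not the INL code.

```python
import math

def guarded_speed(distance_to_obstacle, stop_distance, max_decel, v_max):
    """Highest safe speed: from v^2 = 2*a*d, the speed from which the robot
    can still brake at max_decel within the margin left before the guard
    boundary. Returns 0 at or inside the guard zone."""
    margin = distance_to_obstacle - stop_distance
    if margin <= 0.0:
        return 0.0
    return min(v_max, math.sqrt(2.0 * max_decel * margin))

# Far from the obstacle the cruise speed applies; close in, speed tapers to zero.
print(guarded_speed(100.0, 0.5, 1.0, 2.0))  # 2.0 (cruise)
print(guarded_speed(0.4, 0.5, 1.0, 2.0))    # 0.0 (inside the guard zone)
```

Calling this every control cycle ("iteratively adjusts the robot's action", as the record puts it) makes the stopping behavior independent of the initial speed, since the cap shrinks continuously as the obstacle nears.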
ROBOSIM: An intelligent simulator for robotic systems
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Biegl, Csaba; Springfield, James F.
1993-01-01
The purpose of this paper is to present an update of an intelligent robotics simulator package, ROBOSIM, first introduced at Technology 2000 in 1990. ROBOSIM is used for three-dimensional geometrical modeling of robot manipulators and various objects in their workspace, and for the simulation of action sequences performed by the manipulators. Geometric modeling of robot manipulators is an expanding area of interest because it can aid the design and usage of robots in a number of ways, including: design and testing of manipulators, robot action planning, on-line control of robot manipulators, telerobotic user interfaces, and training and education. NASA developed ROBOSIM between 1985-88 to facilitate the development of robotics, and used the package to develop robotics for welding, coating, and space operations. ROBOSIM has been further developed for academic use by its co-developer Vanderbilt University, and has been used in both classroom and laboratory environments for teaching complex robotic concepts. Plans are being formulated to make ROBOSIM available to all U.S. engineering/engineering technology schools (over three hundred total, with an estimated 10,000+ users per year).
Computer assisted surgery with 3D robot models and visualisation of the telesurgical action.
Rovetta, A
2000-01-01
This paper deals with virtual-reality computer support for surgical robotics procedures. Computer support gives a direct representation of the surgical theatre, and modelling the procedure as it unfolds fosters a sense of safety and reliability. Robots similar to the ones used by the manufacturing industry can be used, with little modification, as very effective surgical tools. They offer high precision and repeatability, and integrate readily with medical instrumentation. Integrated surgical rooms, with computer- and robot-assisted intervention, are now operating. The computer serves as a decision-making aid, and the robot works as a very effective tool.
Robotic situational awareness of actions in human teaming
NASA Astrophysics Data System (ADS)
Tahmoush, Dave
2015-06-01
When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. The twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
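The pipeline sketched in the abstract (skeletonize the point cloud, derive joint-motion features, classify with nearest neighbours) can be illustrated with a toy 1-NN classifier. The 2-D feature vectors and action labels below are made up for illustration; the paper's actual features are physics-based velocity correlations over skeleton joints.

```python
import math

def classify_action(feature, labeled_features):
    """Label a joint-motion feature vector with its nearest neighbour's action."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(labeled_features, key=lambda pair: dist(feature, pair[0]))
    return label

# Made-up 2-D features standing in for joint-velocity correlation vectors.
training = [([0.9, 0.1], "wave"), ([0.1, 0.8], "crouch")]
print(classify_action([0.8, 0.2], training))  # nearest neighbour is "wave"
```

A nearest-neighbour classifier needs no training phase beyond storing labeled examples, which suits an embedded robotic sensor pipeline where new action examples accumulate during operation.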
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complicated function using information acquired from a set of simpler functions with which the robot is equipped. We propose a self-learning and recognition system for human facial expressions that operates within a natural relationship between human and robot. A robot with this system can understand human facial expressions and behave according to them after the learning process is complete. The system is modelled after the process by which a baby learns its parents' facial expressions. A camera on the robot provides face images, and CdS sensors on the robot's head provide information about human actions. Using the information from these sensors, the robot can extract features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs the actions associated with the recognized facial expression.
Using a cognitive architecture for general purpose service robot control
NASA Astrophysics Data System (ADS)
Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo
2015-04-01
A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. By using those skills the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills should be activated, or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should take next and steering it towards the goal. Our proposal allows new goals to be added to the robot simply by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.
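SOAR's decision cycle is far richer than this, but the core idea the abstract describes, choosing the next skill toward the goal instead of executing a precomputed plan, can be sketched as a reactive selector over skill preconditions and effects. The skill names and predicates below are hypothetical; this is an illustration of planner-free sequencing, not the REEM implementation.

```python
def select_skill(state, goal, skills):
    """Pick a skill whose effect is an unmet goal predicate and whose
    preconditions already hold -- no precomputed plan is consulted."""
    for name, (preconds, effect) in skills.items():
        if effect in goal and effect not in state and preconds <= state:
            return name
    return None

# Hypothetical skill set: name -> (precondition predicates, effect predicate).
skills = {
    "navigate": (set(), "at_kitchen"),
    "grasp":    ({"at_kitchen"}, "holding_cup"),
    "deliver":  ({"holding_cup"}, "cup_delivered"),
}
state, goal = set(), {"at_kitchen", "holding_cup", "cup_delivered"}
order = []
while (skill := select_skill(state, goal, skills)) is not None:
    order.append(skill)
    state.add(skills[skill][1])  # assume the skill succeeds
print(order)  # ['navigate', 'grasp', 'deliver']
```

Because selection re-runs after every skill completes, adding a new skill (with its preconditions and effect) immediately extends the set of reachable goals, which mirrors the paper's claim that new goals require no new plans.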
Inner rehearsal modeling for cognitive robotics
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Bergen, Karianne; Dasey, Timothy J.
2011-05-01
This paper presents a biomimetic approach involving cognitive process modeling, for use in intelligent robot decision-making. The principle of inner rehearsal, a process believed to occur in human and animal cognition, involves internally rehearsing actions prior to deciding on and executing an overt action, such as a motor action. The inner-rehearsal algorithmic approach we developed is posed and investigated in the context of a relatively complex cognitive task, an under-rubble search and rescue. The paper presents the approach developed, a synthetic environment built to enable its study, and the results to date. The work reported here is part of a Cognitive Robotics effort in which we are currently engaged, focused on exploring techniques inspired by cognitive science and neuroscience insights, towards artificial cognition for robotics and autonomous systems.
Della Mea, V; Cataldi, P; Pertoldi, B; Beltrami, C A
2000-01-01
The aim of this paper is to describe the experiments carried out to evaluate the diagnostic efficacy of a dynamic-robotic telepathology system for the delivery of pathology services to distant hospitals. The system provides static/dynamic features and the remote control of a robotized microscope over 4 ISDN lines. For evaluation purposes, 184 consecutive cases of frozen sections (60), gastrointestinal pathology (64), and urinary cytology (60) have been diagnosed at a distance using the system, and the telediagnosis obtained in this way has been compared with the traditional microscopic diagnosis. Diagnostic agreement ranged from 90% in urinary cytology to 100% in frozen sections. The results obtained suggest that such a system can be considered a useful tool for supporting the pathology practice in isolated hospitals.
Embodied cognition for autonomous interactive robots.
Hoffman, Guy
2012-10-01
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.
2017-04-19
A sign at the Kennedy Space Center Visitor Complex announces the second annual Swarmathon competition. Students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of cubes with AprilTags, similar to barcodes. Teams developed search algorithms for the Swarmies to operate autonomously, communicating and interacting as a collective swarm similar to ants foraging for food. In the spaceport's second annual Swarmathon, 20 teams representing 22 minority serving universities and community colleges were invited to develop software code to operate these innovative robots known as "Swarmies" to help find resources when astronauts explore distant locations, such as the moon or Mars.
2017-04-19
A display at the Kennedy Space Center Visitor Complex describes the purpose of Swarmies. Computer scientists are developing these robots focusing not so much on the hardware, but the software. In the spaceport's annual Swarmathon, students from 12 colleges and universities across the nation were invited to develop software code to operate Swarmies to help find resources when astronauts explore distant planets, such as Mars.
Rare Neural Correlations Implement Robotic Conditioning with Delayed Rewards and Disturbances
Soltoggio, Andrea; Lemme, Andre; Reinhart, Felix; Steil, Jochen J.
2013-01-01
Neural conditioning associates cues and actions with subsequent rewards. The environments in which robots operate, however, are pervaded by a variety of disturbing stimuli and uncertain timing. In particular, variable reward delays make it difficult to reconstruct which previous actions are responsible for subsequent rewards. Such uncertainty is handled by biological neural networks, but represents a challenge for computational models, suggesting the lack of a satisfactory theory for robotic neural conditioning. The present study demonstrates the use of rare neural correlations in making correct associations between rewards and previous cues or actions. Rare correlations are functional in selecting sparse synapses to be made eligible for later weight updates if a reward occurs. The repetition of this process singles out the associating and reward-triggering pathways, and thereby copes with distal rewards. The neural network displays macro-level classical and operant conditioning, which is demonstrated in an interactive real-life human-robot interaction. The proposed mechanism models realistic conditioning in humans and animals and implements similar behaviors in neuro-robotic platforms. PMID:23565092
NASA Astrophysics Data System (ADS)
Narayan Ray, Dip; Majumder, Somajyoti
2014-07-01
Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but developing such algorithms for unstructured and unknown environments has always been an important issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent, single-robot system. HuMAQ uses the concept of Subsumption architecture, a well-known behaviour-based architecture, for prioritizing the agents of the multi-agent system, and executes only the most common action out of all the different actions recommended by the different agents. Instead of using a new state-action table (Q-table) each time, HuMAQ reuses the immediate past table for efficient and faster exploration. The proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations as well as applications. The same architecture has been modified for multi-robot exploration of an environment. Apart from all the existing agents used in the single-robot system, agents for inter-robot communication and coordination/co-operation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed identical autonomous robotic systems communicating with each other through the ZigBee protocol.
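HuMAQ's exact update and arbitration rules are not given in the abstract. Two ingredients it names can nevertheless be sketched: the textbook tabular Q-learning step it builds on, and a "most common recommended action" vote across the prioritized agents. States, actions, and parameter values below are hypothetical.

```python
from collections import Counter, defaultdict

ACTIONS = ("left", "right", "forward")

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Textbook tabular Q-learning step (the rule HuMAQ builds on)."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def arbitrate(recommendations):
    """Execute the single action recommended by the most agents."""
    return Counter(recommendations).most_common(1)[0][0]

Q = defaultdict(float)  # persisting this table mirrors HuMAQ's reuse of the past Q-table
q_update(Q, s="corridor", a="forward", r=1.0, s_next="junction")
print(arbitrate(["forward", "left", "forward"]))  # forward
```

Carrying `Q` over between episodes, rather than re-initializing it, is the "immediate past table" reuse the abstract credits for faster exploration.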
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco
2018-05-01
Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, in particular a motivational-oriented cognitive architecture that generates more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
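A minimal sketch of this kind of motivational selection follows. The additive scoring rule and all field names are illustrative guesses, not the published HiMoP equations:

```python
def select_behavior(motivations, stimuli, behaviors):
    """Score each finite-state-machine behavior by the urgency of the
    need it serves plus a bonus when an external stimulus triggers it,
    then run the highest-scoring one. Because the motivational variables
    drift over time, the same stimuli can yield different behaviors."""
    def score(b):
        stimulus_bonus = 1.0 if b["trigger"] in stimuli else 0.0
        return motivations.get(b["need"], 0.0) + stimulus_bonus
    return max(behaviors, key=score)["name"]
```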
An Integrated Framework for Human-Robot Collaborative Manipulation.
Sheng, Weihua; Thobbi, Anand; Gu, Ye
2015-10-01
This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I, learning to grasp the table; and 2) phase II, learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence in the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
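The confidence-gated switch between the two controllers can be sketched as below. The threshold value, the gain, and the function names are illustrative assumptions, not parameters from the paper:

```python
def choose_mode(confidence, threshold=0.7):
    """Leader/follower switch: a confident motion prediction lets the
    robot act proactively (lead); otherwise it stays reactive (follow)."""
    return "proactive" if confidence >= threshold else "reactive"

def reactive_correction(tilt_deg, gain=0.05):
    """Reactive term: command a vertical hand velocity opposing the
    sensed table tilt, driving the table back toward horizontal."""
    return -gain * tilt_deg
```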
Planning actions in robot automated operations
NASA Technical Reports Server (NTRS)
Das, A.
1988-01-01
Action planning in robot automated operations requires intelligent task-level programming. Invoking intelligence necessitates a typical blackboard-based architecture, in which a plan is a vector between the start frame and the goal frame. This vector is composed of partially ordered bases. A partial ordering of bases presents both advantages and drawbacks in action planning. Partial ordering demands the use of a temporal database management system.
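Executing such a plan requires turning the partial order of bases into some valid total order, which is a topological sort. A minimal sketch (names are illustrative, not from the paper):

```python
def linearize(bases, before):
    """Topologically sort plan bases given partial-order constraints
    (a, b), meaning base a must precede base b; returns one valid
    total ordering of the plan vector."""
    order, placed = [], set()
    while len(order) < len(bases):
        progress = False
        for b in bases:
            if b in placed:
                continue
            # b is ready once every base constrained to precede it is placed
            if all(a in placed for (a, c) in before if c == b):
                order.append(b)
                placed.add(b)
                progress = True
        if not progress:
            raise ValueError("cyclic ordering constraints")
    return order
```

The flexibility (and the drawback) of partial ordering is that many linearizations may exist; the temporal database is what tracks which ordering constraints actually hold.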
78 FR 49296 - Centennial Challenges 2014 Sample Return Robot Challenge
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-13
... Return Robot Challenge AGENCY: National Aeronautics and Space Administration (NASA). ACTION: Notice of Centennial Challenges 2014 Sample Return Robot Challenge. SUMMARY: This notice is issued in accordance with 51 U.S.C. 20144(c). The 2014 Sample Return Robot Challenge is scheduled and teams that wish to...
76 FR 56819 - Centennial Challenges 2012 Sample Return Robot Challenge
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
... Return Robot Challenge AGENCY: National Aeronautics and Space Administration (NASA). ACTION: Notice. SUMMARY: This notice is issued in accordance with 42 U.S.C. 2451(314)(d). The 2012 Sample Return Robot.... The 2012 Sample Return Robot Challenge is a prize competition designed to encourage development of new...
Learning reliable manipulation strategies without initial physical models
NASA Technical Reports Server (NTRS)
Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.
1990-01-01
A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.
JacksonBot - Design, Simulation and Optimal Control of an Action Painting Robot
NASA Astrophysics Data System (ADS)
Raschke, Michael; Mombaur, Katja; Schubert, Alexander
We present the robotics platform JacksonBot, which is capable of producing paintings inspired by the Action Painting style of Jackson Pollock. A dynamically moving robot arm splashes color from a container at the end effector onto the canvas. The paintings produced by this platform rely on a combination of algorithmically generated robot arm motions and the random effects of the splashing color. The robot can be considered a complex and powerful tool for generating art works programmed by a user. Desired end effector motions can be prescribed by mathematical functions, by point sequences, or by data-glove motions. We have evaluated the effect of different shapes of input motions on the resulting painting. In order to compute the robot joint trajectories necessary to move along a desired end effector path, we use an optimal control based approach to solve the inverse kinematics problem.
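The paper solves inverse kinematics via optimal control; for intuition, the much simpler closed-form solution for a planar two-link arm shows what "joint angles for a desired end-effector position" means (a stand-in, not the platform's solver; link lengths are assumptions):

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm:
    given a reachable end-effector target (x, y), return joint
    angles (q1, q2) using the law of cosines (elbow-down branch)."""
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    d = max(-1.0, min(1.0, d))  # clamp against rounding error
    q2 = math.acos(d)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2
```

An optimal-control formulation generalizes this: instead of solving each point independently, it picks the joint trajectory along the whole path that minimizes a cost functional.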
Interactive robot control system and method of use
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)
2012-01-01
A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating the motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events, and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. Parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within each sequence.
ShouldeRO, an alignment-free two-DOF rehabilitation robot for the shoulder complex.
Dehez, Bruno; Sapin, Julien
2011-01-01
This paper presents a robot aimed at assisting the shoulder movements of stroke patients during their rehabilitation process. The robot has the general form of an exoskeleton, but is characterized by an action principle on the patient that no longer requires a tedious and accurate alignment of the robot's and patient's joints. It consists of a poly-articulated structure whose actuation is located remotely, with transmission ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. Its distal end is connected to the arm through passive joints and a splint guaranteeing the robot's action principle, i.e., exerting a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results, such as the arm angular excursions reached with the robot in the three joint planes. © 2011 IEEE
Towards multi-platform software architecture for Collaborative Teleoperation
NASA Astrophysics Data System (ADS)
Domingues, Christophe; Otmane, Samir; Davesne, Frederic; Mallem, Malik
2009-03-01
Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer, and easier with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, making simultaneous control of a distant robot impossible. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.
Project InterActions: A Multigenerational Robotic Learning Environment
NASA Astrophysics Data System (ADS)
Bers, Marina U.
2007-12-01
This paper presents Project InterActions, a series of 5-week workshops in which very young learners (4- to 7-year-old children) and their parents come together to build and program a personally meaningful robotic project in the context of a multigenerational robotics-based community of practice. The goal of these family workshops is to teach both parents and children about the mechanical and programming aspects involved in robotics, as well as to initiate them in a learning trajectory with and about technology. Results from this project address different ways in which parents and children learn together and provide insights into how to develop educational interventions that would educate parents, as well as children, in new domains of knowledge and skills such as robotics and new technologies.
Robot Manipulations: A Synergy of Visualization, Computation and Action for Spatial Instruction
ERIC Educational Resources Information Center
Verner, Igor M.
2004-01-01
This article considers the use of a learning environment, RoboCell, where manipulations of objects are performed by robot operations specified through the learner's application of mathematical and spatial reasoning. A curriculum is proposed relating to robot kinematics and point-to-point motion, rotation of objects, and robotic assembly of spatial…
Using expectations to monitor robotic progress and recover from problems
NASA Astrophysics Data System (ADS)
Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial
2013-05-01
How does a robot know when something goes wrong? Our research answers this question by leveraging expectations - predictions about the immediate future - and using the mismatch between the expectations and the external world to monitor the robot's progress. We use the cognitive architecture ACT-R (Adaptive Control of Thought - Rational) to learn the associations between the current state of the robot and the world, the action to be performed in the world, and the future state of the world. These associations are used to generate expectations that are then matched by the architecture with the next state of the world. A significant mismatch between these expectations and the actual state of the world indicates a problem, possibly resulting from unexpected consequences of the robot's actions, unforeseen changes in the environment, or unanticipated actions of other agents. When a problem is detected, the recovery model can suggest a number of recovery options. If the situation is unknown, that is, the mismatch between expectations and the world is novel, the robot can use a recovery solution from a set of heuristic options. When a recovery option is successfully applied, the robot learns to associate that recovery option with the mismatch. When the same problem is encountered later, the robot can apply the learned recovery solution rather than using the heuristics or randomly exploring the space of recovery solutions. We present results from execution monitoring and recovery performed during an assessment conducted at the Combined Arms Collective Training Facility (CACTF) at Fort Indiantown Gap.
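The monitor-then-recover loop described above can be sketched as follows. The feature-vector mismatch test, the tolerance, and the memory structure are illustrative assumptions, not the ACT-R implementation:

```python
def mismatch(expected, observed, tol=0.1):
    """Flag a problem when any expected feature of the next world
    state deviates from the observation by more than tol."""
    return any(abs(expected[k] - observed.get(k, 0.0)) > tol for k in expected)

recovery_memory = {}  # mismatch signature -> recovery option that worked

def recover(signature, heuristics):
    """Reuse a recovery option previously learned for this mismatch;
    for a novel mismatch, fall back to the first heuristic option."""
    return recovery_memory.get(signature, heuristics[0])

def learn(signature, option):
    """After a recovery option succeeds, associate it with the mismatch
    so the next occurrence skips the heuristic search."""
    recovery_memory[signature] = option
```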
Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo
2017-01-01
In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. PMID:29311888
2018-04-18
In the Swarmathon competition at the Kennedy Space Center Visitor Complex, students were asked to develop computer code for the small robots, programming them to look for "resources" in the form of AprilTag cubes, similar to barcodes. To add to the challenge, obstacles in the form of simulated rocks were placed in the competition arena. Teams developed search algorithms for the Swarmies to operate autonomously, communicating and interacting as a collective swarm, similar to ants foraging for food. In the spaceport's third annual Swarmathon, 23 teams representing 24 minority-serving universities and community colleges were invited to develop software code to operate these innovative robots, known as "Swarmies," to help find resources when astronauts explore distant locations such as the Moon or Mars.
Heuristic control of the Utah/MIT dextrous robot hand
NASA Technical Reports Server (NTRS)
Bass, Andrew H., Jr.
1987-01-01
Basic hand grips and sensor interactions that a dextrous robot hand will need as part of the operation of an EVA Retriever are analyzed. What is to be done with a dextrous robot hand is examined, along with how such a complex machine might be controlled. It was assumed throughout that an anthropomorphic robot hand should perform tasks just as a human would; i.e., the most efficient approach to developing control strategies for the hand would be to model actual hand actions and do the same tasks in the same ways. Therefore, the basic hand grips that human hands perform, as well as hand grip action, were analyzed. It was also important to examine what is termed sensor fusion: the integration of various disparate sensor feedback paths. These feedback paths can be spatially and temporally separated, as well as of different sensor types. Neural networks are seen as a means of integrating these varied sensor inputs and types. Basic heuristics of hand actions and grips were developed. These heuristics offer promise for controlling dextrous robot hands in a more natural and efficient way.
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2017-01-01
An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or” simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system. 
PMID:29311891
Interacting With Robots to Investigate the Bases of Social Interaction.
Sciutti, Alessandra; Sandini, Giulio
2017-12-01
Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut the delays in the interaction and the need for complex verbal instructions, and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establishing more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to his actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction.
We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.
Observation and imitation of actions performed by humans, androids, and robots: an EMG study
Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.
2015-01-01
Understanding others' actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by well-controlled artificial stimuli, such as robots. One broad question this approach can address is which aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right-arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses.
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782
Robots that can adapt like animals.
Cully, Antoine; Clune, Jeff; Tarapore, Danesh; Mouret, Jean-Baptiste
2015-05-28
Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot 'think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. 
This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
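The prior-map-guided adaptation above can be sketched with a deliberately simplified greedy loop. The published algorithm uses Bayesian optimisation with Gaussian-process updates over a MAP-Elites behaviour map; this sketch only illustrates the core idea that a prior map lets very few physical trials find a compensating behaviour. All names, the stopping ratio, and the greedy policy are assumptions:

```python
def adapt(prior_map, measure, stop_ratio=0.9):
    """Intelligent trial-and-error, greedy version: test behaviours in
    order of predicted performance from the pre-computed map, and stop
    as soon as a trial performs close enough to its prediction."""
    estimates = dict(prior_map)  # behaviour -> predicted performance
    best_behavior, best_perf = None, float("-inf")
    while estimates:
        b = max(estimates, key=estimates.get)
        target = stop_ratio * estimates[b]
        perf = measure(b)  # one physical trial on the damaged robot
        if perf > best_perf:
            best_behavior, best_perf = b, perf
        if perf >= target:
            return best_behavior
        del estimates[b]  # prediction failed here; try the next region
    return best_behavior
```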
Billè, Andrea; Sachidananda, Sandeep; Moreira, Andre L; Rizk, Nabil P
2017-02-01
In advanced stages, thymic tumors tend to spread locally; distant metastatic disease is rare. We present the first report of a single metastatic abdominal lymph node in a 37-year-old female patient, found 5 years after an extrapleural pneumonectomy for stage IV thymoma followed by radiotherapy, with no other evidence of abdominal disease, successfully treated by robotic surgical resection.
Efficient Symbolic Task Planning for Multiple Mobile Robots
2016-12-13
Efficient Symbolic Task Planning for Multiple Mobile Robots. Yuqian Jiang, December 13, 2016. Abstract: Symbolic task planning enables a robot to make … high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot … time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of …
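The thesis abstract above frames task planning as computing a minimum-expected-cost action sequence. As an illustrative sketch only (the toy domain and all names here are ours, not the thesis's), cost-optimal symbolic planning for a single robot can be phrased as uniform-cost search over symbolic states:

```python
import heapq

def plan(start, goal, actions):
    """Uniform-cost search: returns the cheapest action sequence and its
    cost, where each action maps a state to (next_state, cost) or None
    if the action is inapplicable in that state."""
    frontier = [(0, start, [])]
    seen = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in seen:
            continue
        seen.add(state)
        for name, apply in actions.items():
            result = apply(state)
            if result is not None:
                nxt, step_cost = result
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [name]))
    return None

# Toy domain: a robot moving between rooms A, B, C.
actions = {
    "go_AB": lambda s: ("B", 1) if s == "A" else None,
    "go_BC": lambda s: ("C", 1) if s == "B" else None,
    "go_AC": lambda s: ("C", 3) if s == "A" else None,  # direct but costly
}
print(plan("A", "C", actions))  # cheapest route goes via B
```

The search prefers the two-step route (total cost 2) over the direct but costlier action, which is the "minimum expected costs" criterion in its simplest deterministic form.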
Integration of a computerized two-finger gripper for robot workstation safety
NASA Technical Reports Server (NTRS)
Sneckenberger, John E.; Yoshikata, Kazuki
1988-01-01
A microprocessor-based controller has been developed that continuously monitors and adjusts the gripping force applied by a special two-finger gripper. This computerized force-sensing gripper system enables the end-effector gripping action to be independently detected and corrected. The gripping force applied to a manipulated object is monitored in real time for problem situations, which can occur during both planned and errant robot arm manipulation. When unspecified force conditions occur at the gripper, the gripping force controller initiates specific reactions to dynamically correct the continuously variable gripping action. The force controller for this intelligent gripper has been interfaced to the controller of an industrial robot. The gripper and robot controllers communicate to accomplish the successful completion of normal gripper operations as well as to handle unexpected hazardous situations. An example of an unexpected gripping condition would be the sudden deformation of the object being manipulated by the robot. The capabilities of the interfaced gripper-robot system to apply workstation safety measures (e.g., stop the robot) when these unexpected gripping effects occur have been assessed.
Coordinating a Team of Robots for Urban Reconnaissance
2010-11-01
Land Warfare Conference 2010, Brisbane, November 2010. Coordinating a Team of Robots for Urban Reconnaissance. Pradeep Ranganathan, Ryan … without inundating him with micro-management. Behavioral autonomy is also critical for the human operator to productively interact … In today's systems, a human operator controls a single robot, micro-managing every action. This micro-management becomes impossible with more robots: in …
Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars
NASA Astrophysics Data System (ADS)
Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed
2016-02-01
Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
Toward a practical mobile robotic aid system for people with severe physical disabilities.
Regalbuto, M A; Krouskop, T A; Cheatham, J B
1992-01-01
A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.
ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
1995-02-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank
2017-12-14
Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback (reward) for RL. Initially we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online on a single-trial basis with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
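A rough feel for why roughly 90% single-trial ErrP accuracy suffices to learn a gesture-action mapping can be had from a toy simulation (all names and numbers below are illustrative, not the authors' method): noisy binary feedback, accumulated as evidence per gesture-action pair, still singles out the correct mapping.

```python
import random

# Hypothetical sketch: recovering a gesture-to-action mapping from
# implicit, ErrP-like binary feedback with imperfect detection accuracy.
random.seed(0)

gestures = ["wave", "point", "stop"]
robot_actions = ["approach", "grasp", "halt"]
true_map = {"wave": "approach", "point": "grasp", "stop": "halt"}

def errp_feedback(correct, accuracy=0.9):
    """Simulated single-trial ErrP detector: reports the true
    error/no-error label with the stated accuracy."""
    return correct if random.random() < accuracy else not correct

# Accumulate signed feedback evidence for every gesture-action pair.
votes = {(g, a): 0 for g in gestures for a in robot_actions}
for _ in range(900):
    g = random.choice(gestures)
    a = random.choice(robot_actions)          # exploratory action choice
    votes[(g, a)] += 1 if errp_feedback(a == true_map[g]) else -1

learned = {g: max(robot_actions, key=lambda a: votes[(g, a)])
           for g in gestures}
print(learned == true_map)  # noisy feedback still identifies the mapping
```

With ~100 samples per pair, the expected evidence is strongly positive only for correct pairs, so the 10% detection errors wash out; this is the statistical headroom the reported 90-91% bACC provides.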
Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J
2008-05-01
Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.
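The "dithering avoidance" property mentioned above can be illustrated with a deliberately minimal sketch (not the authors' basal ganglia model): giving a persistence bonus to the currently selected action yields hysteresis that a plain if-then-else rule lacks, so small fluctuations in competing drives do not cause rapid behavioral switching.

```python
# Minimal illustration of dithering avoidance via selection hysteresis
# (a toy stand-in for the behavioral property, not the published model).

def select(saliences, current, bonus=0.2):
    """Winner-take-all with a persistence bonus for the currently
    selected action, suppressing rapid back-and-forth switching."""
    boosted = {a: s + (bonus if a == current else 0.0)
               for a, s in saliences.items()}
    return max(boosted, key=boosted.get)

current = None
history = []
# Two drives fluctuating around each other; the last step is a clear win.
for hunger, fatigue in [(0.5, 0.45), (0.45, 0.5), (0.5, 0.45), (0.3, 0.6)]:
    current = select({"eat": hunger, "rest": fatigue}, current)
    history.append(current)
print(history)  # "eat" persists through small fluctuations, then yields
```

An if-then-else rule (pick the larger raw salience each step) would flip on the second step; the hysteresis keeps the agent committed until the alternative clearly dominates, which also saves the energy spent on switching.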
Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.
Novak, Domen; Riener, Robert
2013-06-01
Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.
2006-07-01
mobility in complex terrain, robot system designers are still seeking workable processes for map-building, with enduring problems that either require … (human) robot system designers/users can seek to control the consequences of robot actions, deliberate or otherwise. A notable particular application … operators a sufficient feeling of presence; if not, robot system designers will have to provide autonomy to the robot to make up for the gaps in human input
Damholdt, Malene F.; Nørskov, Marco; Yamazaki, Ryuji; Hakli, Raul; Hansen, Catharina Vesterager; Vestergaard, Christina; Seibt, Johanna
2015-01-01
Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens.
This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed. PMID:26635646
Small Body Exploration Technologies as Precursors for Interstellar Robotics
NASA Astrophysics Data System (ADS)
Noble, R. J.; Sykes, M. V.
The scientific activities undertaken to explore our Solar System will be very similar to those required someday at other stars. The systematic exploration of primitive small bodies throughout our Solar System requires new technologies for autonomous robotic spacecraft. These diverse celestial bodies contain clues to the early stages of the Solar System's evolution, as well as information about the origin and transport of water-rich and organic material, the essential building blocks for life. They will be among the first objects studied at distant star systems. The technologies developed to address small body and outer planet exploration will form much of the technical basis for designing interstellar robotic explorers. The Small Bodies Assessment Group, which reports to NASA, initiated a Technology Forum in 2011 that brought together scientists and technologists to discuss the needs and opportunities for small body robotic exploration in the Solar System. Presentations and discussions occurred in the areas of mission and spacecraft design, electric power, propulsion, avionics, communications, autonomous navigation, remote sensing and surface instruments, sampling, intelligent event recognition, and command and sequencing software. In this paper, the major technology themes from the Technology Forum are reviewed, and suggestions are made for developments that will have the largest impact on realizing autonomous robotic vehicles capable of exploring other star systems.
Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations
NASA Technical Reports Server (NTRS)
Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana
2011-01-01
We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.
Forming Human-Robot Teams Across Time and Space
NASA Technical Reports Server (NTRS)
Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.
2012-01-01
NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth, and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele." Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample, or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes, and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace.
This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection as well as force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.
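The delay-tolerant command mode described above (an instantaneous local simulation plus delayed monitoring of the real robot) can be sketched in a few lines; the one-dimensional dynamics and the delay value below are purely illustrative.

```python
from collections import deque

# Hypothetical sketch of supervisory control across time delay:
# commands drive a local simulation instantly, while the distant robot
# receives them after a delay and echoes telemetry back for monitoring.

DELAY = 3            # one-way light-time delay in time steps (illustrative)

sim_pos = 0          # operator-side simulation, responds immediately
robot_pos = 0        # distant robot state
uplink = deque()     # commands in flight to the robot
downlink = deque()   # telemetry in flight back to the operator

log = []             # telemetry as the supervisor sees it
commands = [1, 1, -1, 2, 0, 0, 0, 0, 0, 0]
for t, cmd in enumerate(commands):
    sim_pos += cmd                      # simulation shows the result now
    uplink.append((t + DELAY, cmd))     # command arrives DELAY steps later
    if uplink and uplink[0][0] == t:
        _, c = uplink.popleft()
        robot_pos += c                  # robot executes the delayed command
        downlink.append((t + DELAY, robot_pos))
    while downlink and downlink[0][0] == t:
        log.append(downlink.popleft()[1])

print(sim_pos, robot_pos, log)  # telemetry lags but converges on the sim
```

The supervisor always sees the zero-delay simulation, while the telemetry log trails the simulated state by the round-trip time; this lag is exactly the interval over which the on-board safety system must act on its own.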
Judging near and distant virtue and vice
Eyal, Tal; Liberman, Nira; Trope, Yaacov
2009-01-01
We propose that people judge immoral acts as more offensive and moral acts as more virtuous when the acts are psychologically distant than near. This is because people construe more distant situations in terms of moral principles, rather than attenuating situation-specific considerations. Results of four studies support these predictions. Study 1 shows that more temporally distant transgressions (e.g., eating one's dead dog) are construed in terms of moral principles rather than contextual information. Studies 2 and 3 further show that morally offensive actions are judged more severely when imagined from a more distant temporal (Study 2) or social (Study 3) perspective. Finally, Study 4 shows that moral acts (e.g., adopting a disabled child) are judged more positively from temporal distance. The findings suggest that people more readily apply their moral principles to distant rather than proximal behaviors. PMID:19554217
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
Ferris, Daniel P
2009-06-09
It is a fantastic time for the field of robotic exoskeletons. Recent advances in actuators, sensors, materials, batteries, and computer processors have given new hope to creating the exoskeletons of yesteryear's science fiction. While the most common goal of an exoskeleton is to provide superhuman strength or endurance, scientists and engineers around the world are building exoskeletons with a wide range of diverse purposes. Exoskeletons can help patients with neurological disabilities improve their motor performance by providing task specific practice. Exoskeletons can help physiologists better understand how the human body works by providing a novel experimental perturbation. Exoskeletons can even help power mobile phones, music players, and other portable electronic devices by siphoning mechanical work performed during human locomotion. This special thematic series on robotic lower limb exoskeletons and orthoses includes eight papers presenting novel contributions to the field. The collective message of the papers is that robotic exoskeletons will contribute in many ways to the future benefit of humankind, and that future is not that distant.
Quantum robots and environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-08-01
Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single time step dynamics is associated with each task. T = T_a + T_c is a sum of action phase and computation phase step operators. Conditions that T_a and T_c should satisfy are given, along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task, carrying out a measurement on a very simple environment, is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths. © 1998 The American Physical Society
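Restating the dynamics from the abstract in display form (the notation follows the abstract; writing the n-step evolution as a power of the step operator is our reading of "single time step dynamics"):

```latex
T = T_a + T_c, \qquad |\Psi(n)\rangle = T^{\,n}\,|\Psi(0)\rangle ,
```

where T is the unitary single-step operator for a task, T_a acts during action phases (robot motion and environment changes), and T_c acts during computation phases (deciding and recording).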
Affect in Human-Robot Interaction
2014-01-01
is capable of learning and producing a large number of facial expressions based on Ekman's Facial Action Coding System, FACS (Ekman and Friesen 1978) … tactile (pushed, stroked, etc.), auditory (loud sound), temperature, and olfactory (alcohol, smoke, etc.). The personality of the robot consists of … robot's behavior through decision-making, learning, or action selection; a number of researchers used the fuzzy logic approach to emotion generation
76 FR 67716 - Notice of Intent To Grant Partially Exclusive Patent License; ReconRobotics, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-02
... DEPARTMENT OF DEFENSE Department of the Navy Notice of Intent To Grant Partially Exclusive Patent License; ReconRobotics, Inc. AGENCY: Department of the Navy, DoD. ACTION: Notice. SUMMARY: The Department of the Navy hereby gives notice of its intent to grant to ReconRobotics, Inc., a revocable...
ReACT!: An Interactive Educational Tool for AI Planning for Robotics
ERIC Educational Resources Information Center
Dogmus, Zeynep; Erdem, Esra; Patogulu, Volkan
2015-01-01
This paper presents ReAct!, an interactive educational tool for artificial intelligence (AI) planning for robotics. ReAct! enables students to describe robots' actions and change in dynamic domains without first having to know about the syntactic and semantic details of the underlying formalism, and to solve planning problems using…
Robotics Algorithms Provide Nutritional Guidelines
NASA Technical Reports Server (NTRS)
2009-01-01
On July 5, 1997, a small robot emerged from its lander like an insect from an egg, crawling out onto the rocky surface of Mars. About the size of a child's wagon, NASA's Sojourner robot was the first successful rover mission to the Red Planet. For 83 sols (Martian days, typically about 40 minutes longer than Earth days), Sojourner - largely remote controlled by NASA operators on Earth - transmitted photos and data unlike any previously collected. Sojourner was perhaps the crowning achievement of the NASA Space Telerobotics Program, an Agency initiative designed to push the limits of robotics in space. Telerobotics - devices that merge the autonomy of robotics with the direct human control of teleoperators - was already a part of NASA's efforts; probes like the Viking landers that preceded Sojourner on Mars, for example, were telerobotic applications. The Space Telerobotics Program, a collaboration between Ames Research Center, Johnson Space Center, Jet Propulsion Laboratory (JPL), and multiple universities, focused on developing remote-controlled robotics for three main purposes: on-orbit assembly and servicing, science payload tending, and planetary surface robotics. The overarching goal was to create robots that could be guided to build structures in space, monitor scientific experiments, and, like Sojourner, scout distant planets in advance of human explorers. While telerobotics remains a significant aspect of NASA's efforts - as evidenced by the currently operating Spirit and Opportunity Mars rovers, the Hubble Space Telescope, and many others - the Space Telerobotics Program was dissolved and redistributed within the Agency the same year as Sojourner's success. The program produced a host of remarkable technologies and surprising inspirations, including one that is changing the way people eat.
Simulation-based intelligent robotic agent for Space Station Freedom
NASA Technical Reports Server (NTRS)
Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.
1990-01-01
A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.
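The simulate-first dispatch pattern described above can be caricatured in a few lines. Everything below is hypothetical scaffolding: the `SimWorkspace` and `RealManipulator` classes and the `dispatch` helper are illustrative stand-ins, not the paper's actual interfaces.

```python
class SimWorkspace:
    """Stand-in for the structural simulation of manipulators and workspace."""
    def execute(self, command):
        # A real implementation would run the motion model and check for
        # collisions; here any command except the bad one "succeeds".
        return command != "sweep_through_obstacle"

class RealManipulator:
    def __init__(self):
        self.log = []          # commands actually sent to hardware
    def execute(self, command):
        self.log.append(command)

def dispatch(command, sim, real):
    """Refine and verify a planner command in simulation; transfer it to
    the real manipulator only upon successful completion."""
    if sim.execute(command):
        real.execute(command)
        return True
    return False

sim, real = SimWorkspace(), RealManipulator()
dispatch("move_to_part", sim, real)            # passes simulation -> sent to robot
dispatch("sweep_through_obstacle", sim, real)  # fails simulation -> blocked
print(real.log)  # ['move_to_part']
```

The real hardware only ever sees commands that already succeeded in the simulated workspace, which is the safety property the abstract describes.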
Positional control of space robot manipulator
NASA Astrophysics Data System (ADS)
Kurochkin, Vladislav; Shymanchuk, Dzmitry
2018-05-01
In this article, the mathematical model of a planar space robot manipulator is studied. The space robot manipulator is a solid body with attached manipulators. The equations of motion are derived using Lagrange's equations. The control problem of moving the robot to a given point and returning it to a given trajectory in the phase space is solved. Changes of the generalized coordinates and the necessary control actions are plotted for a specific model.
2012-12-04
CAPE CANAVERAL, Fla. – At the Kennedy Space Center Visitor Complex in Florida, sixth-grade students view a mock-up of a robotic device that could one day be sent to a distant planet. Between Nov. 26 and Dec. 7, 2012, about 5,300 sixth-graders in Brevard County, Florida, were bused to Kennedy's Visitor Complex for Brevard Space Week, an educational program designed to encourage interest in science, technology, engineering and mathematics (STEM) careers. Photo credit: NASA/Tim Jacobs
Electroencephalography (EEG)-based instinctive brain-control of a quadruped locomotion robot.
Jia, Wenchuan; Huang, Dandan; Luo, Xin; Pu, Huayan; Chen, Xuedong; Bai, Ou
2012-01-01
Artificial intelligence and bionic control have been applied in electroencephalography (EEG)-based robot systems to execute complex brain-control tasks. Nevertheless, due to technical limitations of EEG decoding, the brain-computer interface (BCI) protocol is often complex, and the mapping between EEG signals and practical instructions often lacks a logical association, which restricts actual use. This paper presents a strategy that can be used to control a quadruped locomotion robot through the user's instinctive actions, based on five kinds of movement-related neurophysiological signals. In actual use, the user performs or imagines limb/wrist actions to generate EEG signals that adjust the real movement of the robot according to his or her own motor response to the robot's locomotion. This method is easy to use in practice, as the user generates the brain-control signal through instinctive reactions. By adopting behavioral control based on learning and evolution on top of the proposed strategy, complex movement tasks may be realized through instinctive brain-control.
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes. PMID:29872389
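The lazy greedy selection the authors rely on exploits submodularity: an action's marginal information gain can only shrink as more actions are chosen, so previously computed gains are safe upper bounds that need re-evaluation only when an action reaches the top of the queue. A minimal sketch, with a toy coverage function standing in for the Monte Carlo IG estimate (action names and the coverage table are invented for illustration):

```python
import heapq

def lazy_greedy(actions, gain, budget):
    """Greedy maximization of a submodular set function with lazy
    re-evaluation of marginal gains (Minoux's accelerated greedy)."""
    heap = [(-gain(set(), a), a) for a in actions]   # stale upper bounds
    heapq.heapify(heap)
    chosen = set()
    while heap and len(chosen) < budget:
        _, a = heapq.heappop(heap)
        fresh = gain(chosen, a)                # re-evaluate against current set
        if not heap or fresh >= -heap[0][0]:
            chosen.add(a)                      # still the best: commit to it
        else:
            heapq.heappush(heap, (-fresh, a))  # bound shrank: defer
    return chosen

# Toy stand-in for information gain: how many new object "features"
# an action (look, grasp, ...) would reveal.
coverage = {"look": {1, 2}, "grasp": {2, 3, 4}, "shake": {4}, "tap": {5}}

def gain(selected, a):
    seen = set().union(*(coverage[s] for s in selected)) if selected else set()
    return len(coverage[a] - seen)

print(sorted(lazy_greedy(coverage, gain, 2)))  # ['grasp', 'look']
```

Because gains are non-increasing as the chosen set grows, a stale value at the heap top that still beats the runner-up is guaranteed optimal, which is what makes the lazy variant both correct and fast.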
ERIC Educational Resources Information Center
Cangelosi, Angelo; Riga, Thomas
2006-01-01
The grounding of symbols in computational models of linguistic abilities is one of the fundamental properties of psychologically plausible cognitive models. In this article, we present an embodied model for the grounding of language in action based on epigenetic robots. Epigenetic robotics is one of the new cognitive modeling approaches to…
Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition
Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford
2010-01-01
The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629
Robotics and telecommunication systems to provide better access to ultrasound expertise in the OR.
Angelini, L; Papaspyropoulos, V
2000-01-01
Surgery has begun to evolve as a result of the intense use of technological innovations. The result of this is better services for patients and enormous opportunities for the producers of biomedical instruments. The surgeon and the technologist are fast becoming allies in applying the latest developments of robotics, image treatment, simulation, sensors and telecommunications to surgery, in particular to the emerging field of minimally-invasive surgery. Ultrasonography is at present utilised both for diagnostic and therapeutic purposes in various fields. Intraoperative US examination can be of primary importance, especially when dealing with space-occupying lesions. The widening use of minimally-invasive surgery has furthered the development of US for use during this type of surgery. The success of a US examination requires not only a correct execution of the procedure, but also a correct interpretation of the images. We describe two projects that combine robotics and telecommunication systems to provide better access to US expertise in the operating room. The Midstep project has as its object the realisation of two robotic arms, one for the distant control of the US probe during laparoscopic surgery and the second to perform tele-interventional US. The second project, part of the Strategic CNR Project-'Robotics in Surgery', involves the realisation of a common platform for tracking and targeting surgical instruments in video-assisted surgery.
[Short-term efficacy of da Vinci robotic surgical system on rectal cancer in 101 patients].
Zeng, Dong-Zhu; Shi, Yan; Lei, Xiao; Tang, Bo; Hao, Ying-Xue; Luo, Hua-Xing; Lan, Yuan-Zhi; Yu, Pei-Wu
2013-05-01
To investigate the feasibility and safety of the da Vinci robotic surgical system in radical operation for rectal cancer, and to summarize its short-term efficacy and clinical experience. Data of 101 cases undergoing radical operation for rectal cancer with the da Vinci robotic surgical system from March 2010 to September 2012 were retrospectively analyzed. Evaluation focused on operative procedure, complications, recovery, and pathology. All 101 cases underwent operation successfully and safely without conversion to an open procedure. Radical operations with the da Vinci robotic surgical system comprised 73 low anterior resections and 28 abdominoperineal resections. The average operative time was (210.3±47.2) min. The average blood loss was (60.5±28.7) ml, without transfusion. The mean lymph node harvest was 17.3±5.4. Passage of first flatus occurred at (2.7±0.7) d. The distal margin was (5.3±2.3) cm, without residual cancer cells. The complication rate was 6.9%, including anastomotic leakage (n=2), perineal incision infection (n=2), pulmonary infection (n=2), and urinary retention (n=1). There was no postoperative death. The mean follow-up time was (12.9±8.0) months. No local recurrence was found, although 2 cases developed distant metastasis. Application of the da Vinci robotic surgical system in radical operation for rectal cancer is safe, and patients recover quickly. The short-term efficacy is satisfactory.
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
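The execute/verify/correct/escalate loop described above can be sketched as a small supervisory routine. The `Step` class, its fault counter, and `run_task` are hypothetical stand-ins invented for illustration, not the TaskForce API.

```python
class Step:
    """Hypothetical task step with stubbed execution, autonomous
    verification, and a corrective action that repairs one fault."""
    def __init__(self, name, faults=0):
        self.name = name
        self.faults = faults
    def execute(self):
        pass                        # send the command to the robot (stubbed)
    def verify(self):
        return self.faults == 0     # autonomous verification of the outcome
    def correct(self):
        self.faults -= 1            # autonomous corrective action

def run_task(steps, ask_operator, retries=2):
    """Run each step, trying autonomous correction first and escalating
    to the human operator only when retries are exhausted."""
    for step in steps:
        for _ in range(retries + 1):
            step.execute()
            if step.verify():
                break
            step.correct()
        else:                        # no break: autonomy failed
            if not ask_operator(step):
                return False
    return True

# A step that fails once is recovered without bothering the operator:
ok = run_task([Step("grasp_bag", faults=1)], ask_operator=lambda s: False)
print(ok)  # True
```

The point of the structure is the ordering: the operator is the last resort, consulted only after verification and autonomous correction have both been exhausted.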
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Levy, Sharona T.; Mioduser, David
2008-01-01
This study investigates young children's perspectives in explaining a self-regulating mobile robot, as they learn to program its behaviors from rules. We explore their descriptions of a robot in action to determine the nature of their explanatory frameworks: psychological or technological. We have also studied the role of an adult's intervention…
Quantum robots plus environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-07-23
A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far - oracles, databases, and quantum registers - are seen to be special cases of the environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1-D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.
Improving Grasp Skills Using Schema Structured Learning
NASA Technical Reports Server (NTRS)
Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.
2006-01-01
In the control-based approach to robotics, complex behavior is created by sequencing and combining control primitives. While it is desirable for the robot to autonomously learn the correct control sequence, searching through the large number of potential solutions can be time consuming. This paper constrains this search to variations of a generalized solution encoded in a framework known as an action schema. A new algorithm, SCHEMA STRUCTURED LEARNING, is proposed that repeatedly executes variations of the generalized solution in search of instantiations that satisfy action schema objectives. This approach is tested in a grasping task where Dexter, the UMass humanoid robot, learns which reaching and grasping controllers maximize the probability of grasp success.
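Schema-structured learning, as summarized above, repeatedly executes variations of a generalized solution and keeps the instantiations that best satisfy the schema's objectives. A bandit-style caricature of that loop, assuming a hypothetical (reach, grasp) controller schema with made-up success rates that the robot must discover:

```python
import random
random.seed(0)

# Hypothetical action schema for grasping: an instantiation is a
# (reach controller, grasp controller) pair. True success rates are
# invented and unknown to the learner.
TRUE_P = {("top_approach", "pinch"): 0.2, ("top_approach", "power"): 0.5,
          ("side_approach", "pinch"): 0.3, ("side_approach", "power"): 0.9}
variations = list(TRUE_P)
stats = {v: [0, 0] for v in variations}        # [successes, trials]

def estimate(v):
    s, n = stats[v]
    return s / n if n else 1.0                  # optimistic for untried variants

for trial in range(2000):
    # Epsilon-greedy over schema instantiations: mostly exploit the
    # variation with the best empirical grasp success, sometimes explore.
    v = random.choice(variations) if random.random() < 0.3 else max(variations, key=estimate)
    success = random.random() < TRUE_P[v]       # "execute" the grasp variation
    stats[v][0] += success
    stats[v][1] += 1

best = max(variations, key=estimate)
print(best)  # expected: ('side_approach', 'power')
```

The schema constrains the search to four structured variations rather than an unstructured controller space, which is the efficiency argument the abstract makes.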
NASA Astrophysics Data System (ADS)
Simione, Luca; Nolfi, Stefano
2014-10-01
In this paper we illustrate how the problem of selecting the most appropriate action in contexts affording multiple conflicting actions can be solved either through a selective attention strategy (in which the stimuli affording alternative actions are filtered out at the perceptual level through top-down regulation) or at later processing stages through an action selection strategy (through the suppression of the premotor information eliciting alternative actions). By carrying out a series of experiments in which a neuro-robot develops an ability to choose between conflicting actions, we were able to identify the conditions that lead to the development of solutions based on one strategy or the other. Overall, the results indicate that the selective attention strategy constitutes the simplest and most straightforward mechanism enabling the acquisition of such capacities. Moreover, the characteristics of the adaptive/learning process influence whether the adaptive robot converges towards a selective attention and/or an action selection strategy.
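The two strategies can be contrasted with a deliberately tiny sketch: filtering the non-cued stimulus out before it elicits any action (selective attention) versus letting every stimulus elicit premotor activation and then suppressing the losers (action selection). The stimulus names, cue, and suppression gain below are all invented for illustration.

```python
# Two stimuli simultaneously afford different actions; a task cue says
# which stimulus is currently relevant.
stimuli = {"red_light": ("grasp", 0.8), "green_light": ("point", 0.6)}

def selective_attention(stimuli, cue):
    """Resolve the conflict early: non-cued stimuli are filtered out at
    the perceptual level, so only one action is ever elicited."""
    action, _strength = stimuli[cue]
    return action

def action_selection(stimuli, cue):
    """Resolve the conflict late: every stimulus elicits premotor
    activity; activations of non-cued actions are then suppressed."""
    activation = {act: (s if stim == cue else s * 0.1)   # 0.1 = suppression gain
                  for stim, (act, s) in stimuli.items()}
    return max(activation, key=activation.get)

print(selective_attention(stimuli, "red_light"),
      action_selection(stimuli, "red_light"))  # grasp grasp
```

Both routes pick the same action here; the paper's point is about where in the processing stream the competition is resolved, not about the final choice.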
NASA's asteroid redirect mission: Robotic boulder capture option
NASA Astrophysics Data System (ADS)
Abell, P.; Nuth, J.; Mazanek, D.; Merrill, R.; Reeves, D.; Naasz, B.
2014-07-01
NASA is examining two options for the Asteroid Redirect Mission (ARM), which will return asteroid material to a Lunar Distant Retrograde Orbit (LDRO) using a robotic solar-electric-propulsion spacecraft, called the Asteroid Redirect Vehicle (ARV). Once the ARV places the asteroid material into the LDRO, a piloted mission will rendezvous and dock with the ARV. After docking, astronauts will conduct two extravehicular activities (EVAs) to inspect and sample the asteroid material before returning to Earth. One option involves capturing an entire small (~4-10 m diameter) near-Earth asteroid (NEA) inside a large inflatable bag. However, NASA is also examining another option that entails retrieving a boulder (~1-5 m) via robotic manipulators from the surface of a larger (~100+ m) pre-characterized NEA. The Robotic Boulder Capture (RBC) option can leverage robotic mission data to help ensure success by targeting previously (or soon to be) well-characterized NEAs. For example, data from the Japan Aerospace Exploration Agency's (JAXA) Hayabusa mission has been utilized to develop detailed mission designs that assess options and risks associated with proximity and surface operations. Hayabusa's target NEA, Itokawa, has been identified as a valid target and is known to possess hundreds of appropriately sized boulders on its surface. Further robotic characterization of additional NEAs (e.g., Bennu and 1999 JU3) by NASA's OSIRIS-REx and JAXA's Hayabusa 2 missions is planned to begin in 2018. This ARM option reduces mission risk and provides increased benefits for science, human exploration, resource utilization, and planetary defense.
Konofaos, Petros; Hammond, Sarah; Ver Halen, Jon P; Samant, Sandeep
2013-02-01
Although the use of transoral robotic surgery for tumor extirpation is expanding, little is known about national trends in the reconstruction of resultant defects. An 18-question electronic survey was created by an expert panel of surgeons from the Department of Otolaryngology-Head and Neck Surgery and the Department of Plastic and Reconstructive Surgery at the University of Tennessee. Eligible participants were identified by the American Head and Neck Society Web site and from the Intuitive Surgical, Inc., Web site after review of surgeons trained in transoral robotic surgery techniques. Twenty-three of 27 preselected head and neck surgeons (85.18 percent) completed the survey. All respondents use transoral robotic surgery for head and neck tumor extirpation. The majority of the respondents [n = 17 (77.3 percent)] did not use any means of reconstruction. With respect to methods of reconstruction following transoral robotic surgery defects, the majority [n = 4 (80.0 percent)] used a free flap, a pedicled local flap [n = 3 (60.0 percent)], or a distant flap [n = 3 (60.0 percent)]. The radial forearm flap was the most commonly used free flap by all respondents. In general, the majority of survey respondents allow defects to heal secondarily or close primarily. Based on this survey, consensus indications for pedicled or free tissue transfer following transoral robotic surgery defects were primary head and neck tumors (stage T3 and T4a), pharyngeal defects with exposure of vital structures, and prior irradiation or chemoradiation to the operative site and neck.
Information-Driven Active Audio-Visual Source Localization
Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph
2015-01-01
We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
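A bearing-only particle filter of the kind described can be sketched compactly: the robot measures only the direction of the source, and moving between measurements is what makes the distance observable. The poses, noise levels, and Gaussian bearing likelihood below are assumptions for illustration, not the paper's parameters.

```python
import math, random
random.seed(1)
SOURCE = (3.0, 2.0)          # ground truth, used only to simulate measurements

def bearing(frm, to):
    return math.atan2(to[1] - frm[1], to[0] - frm[0])

def ang_diff(a, b):          # smallest signed angle between two bearings
    return (a - b + math.pi) % (2 * math.pi) - math.pi

# Initialize particles uniformly over a 10 m x 10 m area.
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(3000)]

for robot in [(0.0, 0.0), (6.0, 0.0), (0.0, 5.0)]:       # poses between moves
    z = bearing(robot, SOURCE) + random.gauss(0, 0.02)   # noisy bearing only
    # Weight particles by how well they explain the measured bearing.
    w = [math.exp(-ang_diff(bearing(robot, p), z) ** 2 / (2 * 0.05 ** 2))
         for p in particles]
    particles = random.choices(particles, weights=w, k=len(particles))
    # Jitter keeps particle diversity after resampling.
    particles = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
                 for x, y in particles]

est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
print(est)  # close to the true source at (3, 2)
```

After the first measurement the particles collapse onto a bearing ray; each subsequent measurement from a new pose cuts that ray down, which is exactly the uncertainty-reduction-through-motion mechanism the abstract describes.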
Equipment and technology in surgical robotics.
Sim, Hong Gee; Yip, Sidney Kam Hung; Cheng, Christopher Wai Sam
2006-06-01
Contemporary medical robotic systems used in urologic surgery usually consist of a computer and a mechanical device to carry out the designated task with an image acquisition module. These systems are typically from one of the two categories: offline or online robots. Offline robots, also known as fixed path robots, are completely automated with pre-programmed motion planning based on pre-operative imaging studies where precise movements within set confines are carried out. Online robotic systems rely on continuous input from the surgeons and change their movements and actions according to the input in real time. This class of robots is further divided into endoscopic manipulators and master-slave robotic systems. Current robotic surgical systems have resulted in a paradigm shift in the minimally invasive approach to complex laparoscopic urological procedures. Future developments will focus on refining haptic feedback, system miniaturization and improved augmented reality and telesurgical capabilities.
The Power of Educational Robotics
NASA Astrophysics Data System (ADS)
Cummings, Timothy
The purpose of this action research project was to investigate the impact a student's participation in educational robotics has on his or her performance in the STEM subjects. This study attempted to utilize educational robotics as a method for increasing student achievement and engagement in STEM subjects. Over the course of 12 weeks, an after-school robotics program was offered to students. Guided by the standards and principles of VEX IQ, a leading resource in educational robotics, students worked in collaboration on creating a design for their robot, building and testing their robot, and competing in the VEX IQ Crossover Challenge. Student data were gathered through a pre-participation survey, observations of the work students performed in robotics club, their performance in STEM subject classes, and analysis of their end-of-the-year report cards. Results suggest that the students who participated in robotics club experienced a positive impact on their performance in STEM subject classes.
Fronto-parietal coding of goal-directed actions performed by artificial agents.
Kupferberg, Aleksandra; Iacoboni, Marco; Flanagin, Virginia; Huber, Markus; Kasparbauer, Anna; Baumgartner, Thomas; Hasler, Gregor; Schmidt, Florian; Borst, Christoph; Glasauer, Stefan
2018-03-01
With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates the naturalness and easiness of interaction. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it is unknown whether this activity is modulated by observing grasping (self-related action) and pointing actions (other-related action) performed by an artificial agent depending on the action goal. Therefore, we designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by both agents, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network not only for observing a human but also a robotic agent for both self-related and social actions. The debriefing after the experiment has shown that actions of human-like artificial agents can be perceived as being goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment. © 2017 Wiley Periodicals, Inc.
Little Dog learning of tractive and compressive terrain characteristics
NASA Astrophysics Data System (ADS)
Digney, Bruce L.
2011-05-01
In recent years, research into legged locomotion across extreme terrains has increased. Much of this work was done under the DARPA Learning Legged Locomotion program, which utilized a standard Little Dog robot platform and prepared terrain test boards with known geometric data. While path planning using geometric information is necessary, acquiring and utilizing tractive and compressive terrain characteristics is equally important. This paper describes methods and results for learning tractive and compressive terrain characteristics with the Little Dog robot. The estimation of terrain traction and compressive/support capabilities using the mechanisms and movements of the robot, rather than dedicated instruments, is the goal of this research. The resulting characteristics may differ from those of standard tests; however, they will be directly usable by the locomotion controllers, given that they are obtained in the physical context of the actual robot and its actual movements. This paper elaborates on the methods used and presents results. Future work will develop better-suited probabilistic models and interweave these methods with other purposeful actions of the robot to lessen the need for direct terrain-probing actions.
Reinforcement learning of periodical gaits in locomotion robots
NASA Astrophysics Data System (ADS)
Svinin, Mikhail; Yamada, Kazuyaki; Ushio, S.; Ueda, Kanji
1999-08-01
Emergence of stable gaits in locomotion robots is studied in this paper. A classifier system, implementing an instance-based reinforcement learning scheme, is used for sensory-motor control of an eight-legged mobile robot. An important feature of the classifier system is its ability to work with a continuous sensor space. The robot does not have prior knowledge of the environment, its own internal model, or the goal coordinates. It is only assumed that the robot can acquire stable gaits by learning how to reach a light source. During the learning process the control system is self-organized by reinforcement signals. Reaching the light source yields a global reward. Forward motion gets a local reward, while stepping back and falling down get a local punishment. Feasibility of the proposed self-organized system is tested in simulation and experiment. The control actions are specified at the leg level. It is shown that, as learning progresses, the number of action rules in the classifier system stabilizes at a certain level, corresponding to the acquired gait patterns.
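The reward structure described (a global reward for reaching the light, small local rewards and punishments for individual steps) can be illustrated with a toy corridor and plain Q-learning standing in for the paper's classifier system; the world, actions, and all constants are invented for illustration.

```python
import random
random.seed(2)

# Toy 1-D corridor: the light source sits at cell 9; per-step "gait"
# actions move the robot forward (+1) or back (-1).
ACTIONS = (1, -1)
GOAL = 9
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(300):
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        # Global reward for reaching the light; local reward for forward
        # motion; local punishment for stepping back.
        r = 10.0 if s2 == GOAL else (0.1 if a == 1 else -0.1)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # expected: all +1, i.e. always step toward the light
```

The mix of a sparse global reward with dense local shaping is what lets the learner converge quickly, mirroring the reward design in the abstract.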
NASA Astrophysics Data System (ADS)
Konolige, Kurt G.; Gutmann, Steffen; Guzzoni, Didier; Ficklin, Robert W.; Nicewarner, Keith E.
1999-08-01
Mobile robot hardware and software are developing to the point where interesting applications for groups of such robots can be contemplated. We envision a set of mobots acting to map and perform surveillance or other tasks within an indoor environment (the Sense Net). A typical application of the Sense Net would be to detect survivors in buildings damaged by earthquake or other disaster, where human searchers would be put at risk. As a team, the Sense Net could reconnoiter a set of buildings faster, more reliably, and more comprehensively than an individual mobot. The team, for example, could dynamically form subteams to perform tasks that cannot be done by individual robots, such as measuring the range to a distant object by forming a long-baseline stereo sensor from a pair of mobots. In addition, the team could automatically reconfigure itself to handle contingencies such as disabled mobots. This paper is a report of our current progress in developing the Sense Net, after the first year of a two-year project. In our approach, each mobot has sufficient autonomy to perform several tasks, such as mapping unknown areas, navigating to specific positions, and detecting, tracking, characterizing, and classifying human and vehicular activity. We detail how some of these tasks are accomplished, and how the mobot group is tasked.
Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.
Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi
2018-03-26
For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instructions from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent receives intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement. Copyright © 2018 Elsevier Ltd. All rights reserved.
HAZBOT - A hazardous materials emergency response mobile robot
NASA Technical Reports Server (NTRS)
Stone, H. W.; Edmonds, G.
1992-01-01
The authors describe the progress that has been made towards the development of a mobile robot that can be used by hazardous materials emergency response teams to perform a variety of tasks including incident localization and characterization, hazardous material identification/classification, site surveillance and monitoring, and ultimately incident mitigation. In September of 1991, the HAZBOT II vehicle performed its first end-to-end demonstration involving a scenario in which the vehicle: navigated to the incident location from a distant (150-200 ft.) deployment site; entered a building through a door with thumb latch style handle and door closer; located and navigated to the suspected incident location (a chemical storeroom); unlocked and opened the storeroom's door; climbed over the storeroom's 12 in. high threshold to enter the storeroom; and located and identified a broken container of benzene.
Some aspects of robotics calibration, design and control
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1990-01-01
The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides a quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, corrective action can be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between commercially available robotics simulation software packages (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion of an adaptive control algorithm is provided.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. Each vehicle is rendered mobile by a DC motor and a servo motor. The robot's perception of its environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E microcontroller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule-based approach.
Usability test of KNRC self-feeding robot.
Song, Won-Kyung; Song, Won-Jin; Kim, Yale; Kim, Jongbae
2013-06-01
Various assistive robots for supporting the activities of daily living have been developed. However, not many of these have been introduced into the market because they were found to be impractical in actual scenarios. In this paper, we report the usability test results of an assistive robot designed for self-feeding by people with disabilities, including those with spinal cord injury, cerebral palsy, and traumatic brain injury. First, we present three versions of a novel self-feeding robot (the KNRC self-feeding robot), which is suitable for use with Korean food, including sticky rice. These robots have been improved based on participatory action design over a period of three years. Next, we discuss the usability tests of the KNRC self-feeding robots. People with disabilities participated in comparative tests between the KNRC self-feeding robot and the commercialized product named My Spoon. The KNRC self-feeding robot showed positive results in relation to satisfaction and performance compared to the commercialized robot when users ate Korean food, including sticky rice.
An implementation of sensor-based force feedback in a compact laparoscopic surgery robot.
Lee, Duk-Hee; Choi, Jaesoon; Park, Jun-Woo; Bach, Du-Jin; Song, Seung-Jun; Kim, Yoon-Ho; Jo, Yungho; Sun, Kyung
2009-01-01
Despite the rapid progress in the clinical application of laparoscopic surgery robots, many shortcomings have not yet been fully overcome, one of which is the lack of reliable haptic feedback. This study implemented a force-feedback structure in our compact laparoscopic surgery robot. The surgery robot is a master-slave configuration robot with 5 DOF (degrees of freedom) corresponding to laparoscopic surgical motion. The force-feedback implementation was made in the robot with torque sensors and controllers installed in the pitch joints of the master and slave robots. A simple dynamic model of action-reaction force in the slave robot was used, through which the reflective force was estimated and fed back to the master robot. The results showed that the system model could be identified with significant fidelity and that force feedback at the master robot was feasible. However, the qualitative human assessment of the fed-back force showed only a limited level of object discrimination ability. Further developments are underway with this result as a framework.
Kimmig, Rainer; Wimberger, Pauline; Buderath, Paul; Aktas, Bahriye; Iannaccone, Antonella; Heubner, Martin
2013-08-26
Radical hysterectomy has been developed as a standard treatment in Stage I and II cervical cancers with and without adjuvant therapy. However, there have been several attempts to standardize the technique of radical hysterectomy required for different tumor extension with variable success. Total mesometrial resection as ontogenetic compartment-based oncologic surgery - developed by open surgery - can be standardized identically for all patients with locally defined tumors. It appears to be promising for patients in terms of radicalness as well as complication rates. Robotic surgery may additionally reduce morbidity compared to open surgery. We describe robotically assisted total mesometrial resection (rTMMR) step by step in cervical cancer and present feasibility data from 26 patients. Patients (n = 26) with the diagnosis of cervical cancer were included. Patients were treated by robotic total mesometrial resection (rTMMR) and pelvic or pelvic/periaortic robotic therapeutic lymphadenectomy (rtLNE) for FIGO stage IA-IIB cervical cancer. No transition to open surgery was necessary. No intraoperative complications were noted. The postoperative complication rate was 23%. Within follow-up time (mean: 18 months) we noted one distant but no locoregional recurrence of cervical cancer. There were no deaths from cervical cancer during the observation period. We conclude that rTMMR and rtLNE is a feasible and safe technique for the treatment of compartment-defined cervical cancer.
Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2
2015-03-01
Distribution is unlimited. Supplementary notes: DCS Corporation, Alexandria, VA. In the past, robot operation has been a high-cognitive...increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the...subsequent course of action planned by the robot. Eight active-duty, US Army Soldiers completed 16 scenario missions using an operator interface
Telepresence and Intervention Robotics
2000-11-01
Sagittarius concept [Fon.95]. A mythical creature illustrates the «quasi-corporal ubiquity»: the Sagittarius. It...and the gripping actions as the exact replica of what can be done by the front part of an antic creature: a Sagittarius (Figure 1). Presence in many...places at the same time. Figure 1: Morphologic Equivalence Between a Human, a Sagittarius and a Robot. It is
Distributed multirobot sensing and tracking: a behavior-based approach
NASA Astrophysics Data System (ADS)
Parker, Lynne E.
1995-09-01
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
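The dependence of one robot's action selection on its teammates can be illustrated with a minimal motivation model in the spirit of ALLIANCE. This is a hedged sketch of the general idea only; the impatience rate, threshold, and function names are assumptions, not the architecture's actual parameters.

```python
# Minimal sketch (assumed, not the authors' code) of ALLIANCE-style
# motivation: a robot's impatience for a task grows over time and is
# suppressed while a teammate broadcasts that it is covering that task.

def update_motivation(motivation: float, impatience_rate: float,
                      teammate_active: bool, threshold: float = 10.0):
    """Return (new_motivation, activated?). Motivation is reset while a
    teammate handles the task; the behavior activates at the threshold."""
    if teammate_active:
        return 0.0, False
    motivation += impatience_rate
    return motivation, motivation >= threshold

# With no teammate covering the task, motivation accrues until the
# robot takes the task over itself.
m, active = 0.0, False
for _ in range(5):
    m, active = update_motivation(m, impatience_rate=3.0, teammate_active=False)
```

The same update with `teammate_active=True` keeps the motivation at zero, which is how area coverage is shared without a central allocator.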
Status of DoD Robotic Programs
1985-03-01
planning or adhere to previously planned routes. Control: controls are microelectronics-based, which provide means of autonomous action directly... Project title: SMART TERRAIN ANALYSIS FOR ROBOTIC SYSTEMS (STARS)
Experientially guided robots [for planet exploration]
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1974-01-01
This paper argues that an experientially guided robot is necessary to successfully explore far-away planets. Such a robot is characterized as having sense organs which receive sensory information from its environment and motor systems which allow it to interact with that environment. The sensorimotor information which it receives is organized into an experiential knowledge structure, and this knowledge in turn is used to guide the robot's future actions. A summary is presented of a problem-solving system which is being used as a test bed for developing such a robot. The robot currently engages in the behaviors of visual tracking, focusing down, and looking around in a simulated Martian landscape. Finally, some unsolved problems are outlined whose solutions are necessary before an experientially guided robot can be produced. These problems center around organizing the motivational and memory structure of the robot and understanding its high-level control mechanisms.
Electroactive polymer (EAP) actuators for future humanlike robots
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph
2009-03-01
Human-like robots are increasingly becoming an engineering reality thanks to recent technology advances. These robots, which are inspired greatly by science fiction, originated from the desire to reproduce the human appearance, functions and intelligence, and they may become our household appliances or even companions. The development of such robots is greatly supported by emerging biologically inspired technologies. Potentially, electroactive polymer (EAP) materials offer actuation capabilities that allow emulating the action of our natural muscles, making such machines perform in a lifelike manner. There are many technical issues related to making such robots, including the need for EAP materials that can operate as effective actuators. Besides the technology challenges, these robots also raise concerns that need to be addressed before super-capable robots are built. These include the need to prevent accidents, deliberate harm, or their use in crimes. In this paper, the potential EAP actuators and the challenges that these robots may pose will be reviewed.
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end, still relatively large and continuous state-action spaces need to be efficiently handled. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For testing our method, we use a four-degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
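Value-function approximation with overlapping receptive-field kernels, as used above, can be sketched in one dimension. The 1-D grid, kernel width, and learning rate below are placeholder assumptions purely for illustration; the paper uses 4-D kernels and roughly 10,000 of them.

```python
import numpy as np

# Illustrative 1-D sketch of function approximation with overlapping
# Gaussian receptive fields (kernels). Grid size and width are assumed.

centers = np.linspace(0.0, 1.0, 11)   # kernel centers on a 1-D state grid
sigma = 0.1                           # kernel width (assumption)

def features(x: float) -> np.ndarray:
    """Activations of the overlapping receptive fields at state x."""
    return np.exp(-0.5 * ((x - centers) / sigma) ** 2)

def value(x: float, w: np.ndarray) -> float:
    """Linear value estimate: weighted sum of kernel activations."""
    return float(features(x) @ w)

def td_update(w: np.ndarray, x: float, target: float, alpha: float = 0.5) -> np.ndarray:
    """TD-style update: move the estimate at x toward the reward target,
    spreading the correction over the kernels that are active at x."""
    phi = features(x)
    return w + alpha * (target - value(x, w)) * phi
```

Because neighboring kernels overlap, a reward observed at one state also raises the value estimate of nearby states, which is what makes learning in large continuous spaces feasible with few trials.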
Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning
Farkaš, Igor; Malík, Tomáš; Rebrová, Kristína
2012-01-01
The recent surge of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in the robot’s peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model, and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well in case of novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action/target suddenly changes during motor execution. PMID:22393319
Tsukamoto, Taiji; Tanaka, Shigeru
2015-08-01
We conducted a questionnaire survey of hospitals with robot-assisted surgical equipment to study changes in their surgical case loads after its installation and their managerial strategies for its purchase. The study included 154 hospitals (as of April 2014) that were queried about their radical prostatectomy case loads from January 2009 to December 2013, strategies for installation of the equipment in their hospitals, and other topics related to the study purpose. The overall response rate of hospitals was 63%, though it varied marginally according to type and area. The annual case load was determined based on the results of the questionnaire and other modalities. It increased from 3,518 in 2009 to 6,425 in 2013. The case load seemed to be concentrated in hospitals with robotic equipment, since the increase in their number was very minimal over the 5 years. The hospitals with robots treated a larger number of newly diagnosed patients with the disease than before. Most of the patients had localized cancer indicated for radical surgery, again suggesting the concentration of the surgical case loads in the hospitals with robots. While most hospitals believed that installation of a robot was necessary as an option for treatment procedures, for the future strategy of the hospital, and for other reasons, a desire to gain prestige may also have been involved in the process of purchasing the equipment. In conclusion, robot-assisted laparoscopic radical prostatectomy has become popular as a surgical procedure for prostate cancer in our society. This may lead to a concentration of the surgical case load in a limited number of hospitals with robots. We also discuss the typical actions of an acute-care hospital when it purchases expensive clinical medical equipment.
Sahaï, Aïsha; Pacherie, Elisabeth; Grynszpan, Ouriel; Berberian, Bruno
2017-01-01
Nowadays, interactions with others do not only involve human peers but also automated systems. Many studies suggest that the motor predictive systems that are engaged during action execution are also involved during joint actions with peers and during the observation of other human-generated actions. Indeed, the comparator model hypothesis suggests that the comparison between a predicted state and an estimated real state enables motor control and, by a similar functioning, understanding and anticipating observed actions. Such a mechanism allows making predictions about an ongoing action, and is essential to action regulation, especially during joint actions with peers. Interestingly, the same comparison process has been shown to be involved in the construction of an individual's sense of agency, both for self-generated and for observed human-generated actions. However, the implication of such predictive mechanisms during interactions with machines is not consensual, probably due to the high heterogeneity of the automata used in the experiments, from very simplistic devices to full humanoid robots. The discrepancies observed during human/machine interactions could arise from the absence of action/observation matching abilities when interacting with traditional low-level automata. Consistently, the difficulty of building joint agency with this kind of machine could stem from the same problem. In this context, we aim to review the studies investigating predictive mechanisms during social interactions with humans and with automated artificial systems. We will start by presenting human data that show the involvement of predictions in action control and in the sense of agency during social interactions. Thereafter, we will confront this literature with data from the robotics field. Finally, we will address the upcoming issues in the field of robotics related to automated systems aimed at acting as collaborative agents. PMID:29081744
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks operating with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural interface using gestures, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the action of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results have shown that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
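The gesture-recognition-to-robot-command dispatch such a framework implies might look like the following sketch. The gesture names, commands, and interface class are hypothetical illustrations, not the paper's actual API.

```python
# Hypothetical gesture-to-command mapping; names are assumptions.
GESTURE_COMMANDS = {
    "swipe_left":  ("move", {"dx": -0.1, "dy": 0.0}),
    "swipe_right": ("move", {"dx": +0.1, "dy": 0.0}),
    "fist":        ("grip", {"close": True}),
    "open_palm":   ("grip", {"close": False}),
}

class RobotInterface:
    """Stand-in for the framework's general hardware interface layer."""
    def __init__(self):
        self.log = []
    def execute(self, command, params):
        self.log.append((command, params))   # a real layer would drive hardware

def dispatch(gesture: str, robot: RobotInterface) -> bool:
    """Translate a recognized gesture into a robot command; unrecognized
    gestures are ignored rather than moving the robot unpredictably."""
    if gesture not in GESTURE_COMMANDS:
        return False
    robot.execute(*GESTURE_COMMANDS[gesture])
    return True
```

Keeping the mapping in a table and routing everything through one interface layer is what gives such a framework its compatibility across robots: only `RobotInterface.execute` needs to change per device.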
The relationship between human agency and embodiment.
Caspar, Emilie A; Cleeremans, Axel; Haggard, Patrick
2015-05-01
Humans regularly feel a sense of agency (SoA) over events where the causal link between action and outcome is extremely indirect. We have investigated how intermediate events (here, the movements of a robotic hand) that intervene between action and outcome may alter SoA, using intentional binding measures. The robotic hand either performed the same movement as the participant (active congruent), or performed a similar movement with another finger (active incongruent). Binding was significantly reduced in the active incongruent relative to the active congruent condition, suggesting that altered embodiment influences SoA. However, binding effects were comparable between a condition where the robot hand made a congruent movement, and conditions where no robot hand was involved, suggesting that intermediate and embodied events do not reduce SoA. We suggest that the human sense of agency involves both statistical associations between intentions and arbitrary outcomes, and an effector-specific matching of the sensorimotor means used to achieve the outcome. Copyright © 2015 Elsevier Inc. All rights reserved.
Architecture for reactive planning of robot actions
NASA Astrophysics Data System (ADS)
Riekki, Jukka P.; Roening, Juha
1995-01-01
In this article, a reactive system for planning robot actions is described. The hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.
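The tree of task-delegating PEMM elements described above might be structured as in this sketch. The class name, the trivial planner, and the trace mechanism are assumptions for illustration, not the authors' implementation.

```python
# Structural sketch (assumed) of a PEMM element tree: each element plans
# a task into subtasks and delegates them to its descendant elements.

class PEMMElement:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.local_model = {}   # this element's share of the distributed model

    def plan(self, task):
        """Split a task into one subtask per child (trivial stand-in planner)."""
        return [(child, f"{task}/{child.name}") for child in self.children]

    def execute(self, task, trace):
        """Plan the task, record it, and delegate subtasks down the tree."""
        trace.append((self.name, task))
        for child, subtask in self.plan(task):
            child.execute(subtask, trace)

# A two-level tree: a root element delegating to two leaf executors.
leg = PEMMElement("leg")
arm = PEMMElement("arm")
root = PEMMElement("robot", [leg, arm])
```

In the actual architecture each element would also run its monitor and modeler concurrently, so that asynchronous events at a leaf can interrupt and replan without involving the root.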
Neuro-cognitive mechanisms of decision making in joint action: a human-robot interaction study.
Bicho, Estela; Erlhagen, Wolfram; Louro, Luis; e Silva, Eliana Costa
2011-10-01
In this paper we present a model for action preparation and decision making in cooperative tasks that is inspired by recent experimental findings about the neuro-cognitive mechanisms supporting joint action in humans. It implements the coordination of actions and goals among the partners as a dynamic process that integrates contextual cues, shared task knowledge and predicted outcome of others' motor behavior. The control architecture is formalized by a system of coupled dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode task-relevant information about action means, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic model of joint action is evaluated in a task in which a robot and a human jointly construct a toy object. We show that the highly context sensitive mapping from action observation onto appropriate complementary actions allows coping with dynamically changing joint action situations. Copyright © 2010 Elsevier B.V. All rights reserved.
Stability Study of Anthropomorphic Robot Antares under External Load Action
NASA Astrophysics Data System (ADS)
Kodyakov, A. S.; Pavlyuk, N. A.; Budkov, V. Yu; Prakapovich, R. A.
2017-01-01
The paper presents a study of the behavior of the major structural elements of the lower limbs of the anthropomorphic robot Antares under the influence of different types of loads (torsion, fracture). We have determined the required actuator torques for motion of the robot in space. The maximum torque values are 5 Nm and 5.2 Nm, respectively, and the upper and lower leg structures are able to withstand them.
Problems and research issues associated with the hybrid control of force and displacement
NASA Technical Reports Server (NTRS)
Paul, R. P.
1987-01-01
The hybrid control of force and position is basic to the science of robotics but is only poorly understood. Before much progress can be made in robotics, this problem needs to be solved in a robust manner. However, the use of hybrid control implies the existence of a model of the environment, not an exact model (as the function of hybrid control is to accommodate these errors), but a model appropriate for planning and reasoning. The monitored forces in position control are interpreted in terms of a model of the task as are the monitored displacements in force control. The reaction forces of the task of writing are far different from those of hammering. The programming of actions in such a modeled world becomes more complicated and systems of task level programming need to be developed. Sensor based robotics, of which force sensing is the most basic, implies an entirely new level of technology. Indeed, robot force sensors, no matter how compliant they may be, must be protected from accidental collisions. This implies other sensors to monitor task execution and again the use of a world model. This new level of technology is the task level, in which task actions are specified, not the actions of individual sensors and manipulators.
Nishimoto, Ryunosuke; Tani, Jun
2009-07-01
The current paper presents a neuro-robotics experiment on developmental learning of goal-directed actions. The robot was trained to predict the visuo-proprioceptive flow of achieving a set of goal-directed behaviors through iterative tutor training processes. The learning was conducted by employing a dynamic neural network model characterized by multiple time-scale dynamics. The experimental results showed that functional hierarchical structures emerge through stages of development, where behavior primitives are generated in earlier stages and sequences of them for achieving goals appear in later stages. It was also observed that motor imagery is generated in earlier stages compared to actual behaviors. Our claim that a manipulatable inner representation should emerge through sensory-motor interactions corresponds to Piaget's constructivist view.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm in a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance in mapping the monkey's neural states to robot actions (94%), and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
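The actor-critic update at the heart of such a decoder can be sketched schematically: the critic learns a value for each state, and the actor raises the probability of actions the critic scores above expectation. The state/action sizes, learning rates, and single-step reward structure below are illustrative assumptions, not the study's decoder.

```python
import numpy as np

# Schematic actor-critic learner driven by a scalar feedback signal.
# All sizes and rates are illustrative assumptions.

n_states, n_actions = 4, 2
preferences = np.zeros((n_states, n_actions))   # actor parameters
values = np.zeros(n_states)                     # critic estimates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(state: int, action: int, reward: float,
         alpha_critic: float = 0.1, alpha_actor: float = 0.1):
    """One actor-critic update from a scalar feedback signal."""
    td_error = reward - values[state]            # single-step task: no bootstrap
    values[state] += alpha_critic * td_error     # critic moves toward the reward
    # actor raises preference for the taken action in proportion to surprise
    preferences[state, action] += alpha_actor * td_error

# Rewarding action 1 in state 0 repeatedly should make it more probable.
for _ in range(50):
    step(state=0, action=1, reward=1.0)
```

The appeal for BMI use is visible here: no training set is stored, and a change in the reward signal or neural input simply shifts subsequent updates, so the mapping re-adapts online.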
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science, yet the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
NASA's Mars 2020 Rover Artist's Concept #3
2017-11-17
This artist's rendition depicts NASA's Mars 2020 rover studying rocks with its robotic arm. The mission will not only seek out and study an area likely to have been habitable in the distant past, but it will take the next, bold step in robotic exploration of the Red Planet by seeking signs of past microbial life itself. Mars 2020 will use powerful instruments to investigate rocks on Mars down to the microscopic scale of variations in texture and composition. It will also acquire and store samples of the most promising rocks and soils that it encounters, and set them aside on the surface of Mars. A future mission could potentially return these samples to Earth. Mars 2020 is targeted for launch in July/August 2020 aboard an Atlas V-541 rocket from Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida. https://photojournal.jpl.nasa.gov/catalog/PIA22106
Towards multi-platform software architecture for Collaborative Teleoperation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domingues, Christophe; Otmane, Samir; Davesne, Frederic
2009-03-05
Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, safer, and easier with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots.
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
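The idea of searching low-dimensional projections for Regions of Inaccurate Modeling can be sketched on synthetic data: each execution sample carries a high-dimensional context vector and a model-error value, and a scan over intervals of each single dimension scores candidate parametric regions by their standardized excess error. The data, scoring rule, and interval search below are simplified stand-ins for the paper's RIM-detection algorithms, not a reimplementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic execution data: 10-D context, model error elevated only when
# dimension 3 lies in [0.4, 0.6] -- a low-dimensional, context-dependent RIM.
X = rng.uniform(0, 1, (4000, 10))
err = rng.normal(0.0, 1.0, 4000)
err[(X[:, 3] > 0.4) & (X[:, 3] < 0.6)] += 2.0

def best_interval(x, err, bins=20):
    """Scan intervals on one projection; score = standardized excess mean error."""
    edges = np.linspace(0, 1, bins + 1)
    best = (0.0, None)
    for i in range(bins):
        for j in range(i + 1, bins + 1):
            mask = (x >= edges[i]) & (x < edges[j])
            n = mask.sum()
            if n < 30:                       # require enough support
                continue
            score = (err[mask].mean() - err.mean()) * np.sqrt(n)
            if score > best[0]:
                best = (score, (edges[i], edges[j]))
    return best

scores = [best_interval(X[:, d], err) for d in range(X.shape[1])]
dim = int(np.argmax([s for s, _ in scores]))
print("suspect dimension:", dim, "interval:", scores[dim][1])
```

The `sqrt(n)` factor trades off region size against error magnitude, so broad weak deviations and narrow noise spikes both lose to the genuine region.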
NASA Technical Reports Server (NTRS)
Agah, Arvin; Bekey, George A.
1994-01-01
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to produce behavior, transforming their sensory information into appropriate actions. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
A robotic orbital emulator with lidar-based SLAM and AMCL for multiple entity pose estimation
NASA Astrophysics Data System (ADS)
Shen, Dan; Xiang, Xingyu; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh
2018-05-01
This paper revises and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can produce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic arm motions. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The former are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (robotic arm) attached to each robot. Lidar-only measurements are used to estimate the pose information of the multiple robots. SLAM (simultaneous localization and mapping) runs on one robot to generate the map and compute that robot's pose. Based on the SLAM map maintained by that robot, the other robots run the adaptive Monte Carlo localization (AMCL) method to estimate their poses. A controller is designed to guide each robot to follow a given orbit, and controllability is analyzed using a feedback linearization method. Experiments are conducted to show the convergence of AMCL and the orbit tracking performance.
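The AMCL step described above is, at its core, a particle filter localized against a known map. A minimal sketch in one dimension, with ranges to three mapped landmarks standing in for lidar returns matched against the SLAM-built map (the map, noise levels, and motion commands are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D corridor: the robot measures ranges to three mapped landmarks,
# a stand-in for matching lidar returns against the SLAM-built map.
LANDMARKS = np.array([2.0, 5.0, 9.0])
SENSE_SD, MOTION_SD = 0.1, 0.05

def sense(x):
    return np.abs(LANDMARKS - x) + rng.normal(0, SENSE_SD, LANDMARKS.size)

def mcl_step(particles, u, z):
    """One Monte Carlo localization cycle: predict, weight, resample."""
    particles = particles + u + rng.normal(0, MOTION_SD, particles.size)
    expected = np.abs(LANDMARKS[None, :] - particles[:, None])
    w = np.exp(-0.5 * np.sum(((z - expected) / SENSE_SD) ** 2, axis=1))
    w = w + 1e-300                    # guard against total underflow
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

true_x, particles = 1.0, rng.uniform(0, 10, 2000)  # global uncertainty at start
for _ in range(20):
    true_x += 0.3                                  # commanded motion per cycle
    particles = mcl_step(particles, 0.3, sense(true_x))

print(f"true={true_x:.2f}  estimate={particles.mean():.2f}")
```

The real AMCL additionally adapts the particle count (KLD sampling) and works on 2-D poses with a full ray-cast sensor model; the predict/weight/resample cycle is the same.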
Soft Robotics: from scientific challenges to technological applications
NASA Astrophysics Data System (ADS)
Laschi, C.
2016-05-01
Soft robotics is a recent and rapidly growing field of research which aims at unveiling the principles for building robots that include soft materials and compliance in their interaction with the environment, so as to exploit so-called embodied intelligence and negotiate natural environments more effectively. Using soft materials for building robots poses new technological challenges: the technologies for actuating soft materials, for embedding sensors into soft robot parts, and for controlling soft robots are among the main ones. This is stimulating research in many disciplines and many countries, such that a wide community is gathering around initiatives like the IEEE RAS TC on Soft Robotics and RoboSoft CA - A Coordination Action for Soft Robotics, funded by the European Commission. Though still in its early stages of development, soft robotics is finding its way into a variety of applications where safe contact is a main issue: in the biomedical field, as well as in exploration tasks and in the manufacturing industry. And though the development of the enabling technologies is still a priority, a fruitful loop is growing between basic research and application-oriented research in soft robotics.
Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.
Wang, Zhijun; Mirdamadi, Reza; Wang, Qing
2016-01-01
Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster-relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and the group intelligence that arises when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before they are designed and manufactured, and so has the potential to reduce the cost and improve the efficiency of robot design and building.
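A Kohonen network (self-organizing map) of the kind named above can be sketched in a few lines: a grid of units competes for each input, and the best-matching unit and its grid neighbors move toward the sample under a shrinking neighborhood and decaying learning rate. The clustered 2-D "sensor readings" and all hyperparameters below are illustrative assumptions, not the project's simulator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical readings the robots might share over the ad hoc network:
# 2-D points drawn from three clusters (e.g. distinct terrain signatures).
centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])
data = np.concatenate([c + rng.normal(0, 0.05, (200, 2)) for c in centers])

GRID = 6                                         # 6x6 Kohonen map
W = rng.uniform(0, 1, (GRID * GRID, 2))          # unit weight vectors
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)], float)

def train(W, data, epochs=30):
    for t in range(epochs):
        sigma = 2.0 * (0.05 / 2.0) ** (t / epochs)   # shrinking neighborhood
        lr = 0.5 * (0.01 / 0.5) ** (t / epochs)      # decaying learning rate
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood kernel
            W += lr * h[:, None] * (x - W)
    return W

def quant_error(W, data):
    return np.mean([np.min(((W - x) ** 2).sum(axis=1)) ** 0.5 for x in data])

e0 = quant_error(W, data)
W = train(W, data)
e1 = quant_error(W, data)
print(f"quantization error: {e0:.3f} -> {e1:.3f}")
```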
A satellite orbital testbed for SATCOM using mobile robots
NASA Astrophysics Data System (ADS)
Shen, Dan; Lu, Wenjie; Wang, Zhonghai; Jia, Bin; Wang, Gang; Wang, Tao; Chen, Genshe; Blasch, Erik; Pham, Khanh
2016-05-01
This paper develops and evaluates a satellite orbital testbed (SOT) for satellite communications (SATCOM). SOT can emulate a 3D satellite orbit using omni-wheeled robots and a robotic arm. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The former are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (robotic arm) attached to the robot. The emulated satellite positions are fed to the measurement model, whose results are used to perform multiple space object tracking. The tracking results then drive maneuver detection and collision alerting, and the satellite maneuver commands are translated into robot and robotic arm commands. In SATCOM, the effects of jamming depend on the range and angles of the satellite transponder's position relative to the jamming satellite. We extend the SOT to include USRP transceivers; in the extended SOT, the relative ranges and angles are implemented using the omni-wheeled robots and robotic arms.
Jones, Raya A
2017-08-01
Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.
NASA Astrophysics Data System (ADS)
Cameron, Jonathan M.; Arkin, Ronald C.
1992-02-01
As mobile robots are used in more uncertain and dangerous environments, it will become important to design them so that they can survive falls. In this paper, we examine a number of mechanisms and strategies that animals use to withstand these potentially catastrophic events and extend them to the design of robots. A brief survey of several aspects of how common cats survive falls provides an understanding of the issues involved in preventing traumatic injury during a falling event. After outlining situations in which robots might fall, a number of factors affecting their survival are described. From this background, several robot design guidelines are derived. These include recommendations for the physical structure of the robot as well as requirements for the robot control architecture. A control architecture is proposed based on reactive control techniques and action-oriented perception that is geared to support this form of survival behavior.
Fast Grasp Contact Computation for a Serial Robot
NASA Technical Reports Server (NTRS)
Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)
2015-01-01
A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.
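For a two-link chain in the 2-D grasp plane, the per-joint calculation described above reduces to the standard law-of-cosines inverse-kinematics step. A minimal sketch (the link lengths and contact point are hypothetical, not taken from the patent):

```python
import math

def joint_angle_2d(base, contact, l1, l2):
    """Elbow joint angle (0 = links aligned) for a two-link planar chain
    whose fingertip must reach `contact` from `base`, via the law of cosines."""
    dx, dy = contact[0] - base[0], contact[1] - base[1]
    r2 = dx * dx + dy * dy                      # squared base-to-contact distance
    c = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c <= 1.0:
        raise ValueError("contact point out of reach")
    return math.acos(c)                         # radians

# Example: two unit-length links, contact point at (1.0, 1.0) in the 2D plane.
theta = joint_angle_2d((0.0, 0.0), (1.0, 1.0), 1.0, 1.0)
print(f"required joint angle: {math.degrees(theta):.1f} deg")
```

The reachability check (|c| <= 1) is what tells the controller a commanded grasp pose cannot be achieved with the given links.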
Weinstein, Ronald S; Graham, Anna R; Lian, Fangru; Braunhut, Beth L; Barker, Gail R; Krupinski, Elizabeth A; Bhattacharyya, Achyut K
2012-04-01
Telepathology, the distant service component of digital pathology, is a growth industry. The word "telepathology" was introduced into the English language in 1986. Initially, two different, competing imaging modalities were used for telepathology: dynamic (real-time) robotic telepathology and static image (store-and-forward) telepathology. In 1989, a hybrid dynamic robotic/static image telepathology system was developed in Norway. This hybrid imaging system bundled the two primary pathology imaging modalities into a single multi-modality pathology imaging system. Similar hybrid systems were subsequently developed and marketed in other countries as well. It is noteworthy that hybrid dynamic robotic/static image telepathology systems provided the infrastructure for the first truly sustainable telepathology services. Since then, impressive progress has been made in developing another telepathology technology, so-called "virtual microscopy" telepathology (also called "whole slide image" or "WSI" telepathology). Over the past decade, WSI has appeared to be emerging as the preferred digital telepathology imaging modality. Recently, however, there has been a re-emergence of interest in dynamic robotic telepathology driven, in part, by concerns over the lack of a means for up-and-down (i.e., Z-axis) focusing in early WSI processors. In 2010, the initial two U.S. patents for robotic telepathology (issued in 1993 and 1994) expired, enabling many digital pathology equipment companies to incorporate dynamic robotic telepathology modules into their WSI products for the first time. The dynamic robotic telepathology module provided a solution to the up-and-down focusing issue. WSI and dynamic robotic telepathology are now rapidly being bundled into a new class of telepathology/digital pathology imaging system, the "WSI-enhanced dynamic robotic telepathology system".
To date, six major WSI processor equipment companies have embraced this approach and developed WSI-enhanced dynamic robotic digital telepathology systems, marketed under a variety of labels. Successful commercialization of such systems could help overcome the current resistance of some pathologists to incorporating digital pathology, and telepathology, into their routine and esoteric laboratory services. WSI-enhanced dynamic robotic telepathology could also be useful for providing general pathology and subspecialty pathology services to many of the world's underserved populations in the decades ahead, becoming an important enabler for the delivery of patient-centered healthcare in the future.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... INTERNATIONAL TRADE COMMISSION [Docket No. 2930] Certain Robotic Toys and Components Thereof.... International Trade Commission. ACTION: Notice. SUMMARY: Notice is hereby given that the U.S. International.... International Trade Commission, 500 E Street SW., Washington, DC 20436, telephone (202) 205-2000. The public...
Using robotics construction kits as metacognitive tools: a research in an Italian primary school.
La Paglia, Filippo; Caci, Barbara; La Barbera, Daniele; Cardaci, Maurizio
2010-01-01
The present paper analyzes the process of building and programming robots as a metacognitive tool. Quantitative data and qualitative observations from a study of a sample of children attending an Italian primary school are described. Results showed that robotics activities can serve as a new metacognitive environment that allows children to monitor themselves and control their learning actions in an autonomous and self-directed way.
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. Such machines nevertheless have many potential applications in medicine, defense, industry, and even the home that make their study important. Sensors such as vision are needed, but in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically, yet relatively little has been done concerning learning. Adaptive and robust control permits point-to-point and controlled-path operation in a changing environment. In the unstructured environment, the terrain, and consequently the load on the robot's motors, is constantly changing; learning the parameters of a proportional-integral-derivative (PID) controller with an artificial neural network provides adaptive and robust control. Learning may also be used for path following: simulations that include learning may be conducted to see whether a robot can learn its way through a cluttered array of obstacles, and if a situation is encountered repetitively, learning can also be applied in the actual task. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed, in which a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic."
A mathematical model of the creative control process is presented and illustrated for mobile robots, along with examples from a variety of intelligent mobile robot applications. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
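The idea of learning PID parameters under a changing load can be sketched with a crude stand-in for the adaptive/neural tuning the paper discusses: simulate a first-order plant whose drag term models the terrain load, and greedily search the gain space to minimize tracking cost. The plant, cost function, and search rule below are all illustrative assumptions.

```python
def run_episode(gains, load=1.0, steps=200, dt=0.05, setpoint=1.0):
    """Simulate a PID loop on a first-order plant v' = u - load*v (unit mass)
    and return the integrated squared tracking error."""
    kp, ki, kd = gains
    v = integ = prev_err = cost = 0.0
    for _ in range(steps):
        err = setpoint - v
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        v += dt * (u - load * v)        # `load` models terrain drag on the motor
        cost += err * err * dt
    return cost

def tune(gains, load, iters=60, step=0.2):
    """Crude learning rule: greedy coordinate search on episode cost, a
    stand-in for the PID-parameter learning the paper describes."""
    gains = list(gains)
    best = run_episode(gains, load)
    for _ in range(iters):
        for i in range(3):
            for delta in (step, -step):
                cand = list(gains)
                cand[i] = max(0.0, cand[i] + delta)
                c = run_episode(cand, load)
                if c < best:            # keep only improving gain updates
                    gains, best = cand, c
    return gains, best

g0 = [1.0, 0.0, 0.0]
c0 = run_episode(g0, load=2.0)          # untuned gains on a heavier load
g1, c1 = tune(g0, load=2.0)             # gains adapted to that load
print(f"cost before tuning: {c0:.3f}  after: {c1:.3f}")
```

Because only improving updates are accepted, the tuned cost can never exceed the initial one; a neural-network tuner would replace the coordinate search with gradient-based adaptation.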
Recognizing surgeon's actions during suture operations from video sequences
NASA Astrophysics Data System (ADS)
Li, Ye; Ohya, Jun; Chiba, Toshio; Xu, Rong; Yamashita, Hiromasa
2014-03-01
Because of the worldwide shortage of nurses, the realization of a robotic nurse that can support surgeries autonomously is very important. More specifically, a robotic nurse should be able to autonomously recognize different situations during surgery so that it can pass the necessary surgical tools to the medical doctors in a timely manner. This paper proposes and explores methods that can classify suture and tying actions during suture operations from a video sequence observing the surgery scene, including the surgeon's hands. First, the proposed method uses skin-pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical K-means to these descriptors, and the word-frequency histogram, which forms the feature space, is computed. Finally, to classify the actions, either an SVM (Support Vector Machine), the Nearest Neighbor rule (NN) in the feature space, or a method that combines a "sliding window" with NN is applied. We collected 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. It turns out that NN gives accuracies above 90%, better recognition than the SVM achieves. Negative actions, which differ from both suture and tying actions, are recognized with quite good accuracy, whereas the "sliding window" method did not show significant improvements for suture and tying and cannot recognize negative actions.
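The recognition pipeline in the abstract (descriptors → K-means vocabulary → word-frequency histogram → NN classification) can be sketched end to end on synthetic data. The "descriptors" below are random stand-ins for 3D SIFT, and plain Lloyd's K-means replaces the hierarchical variant; everything else about the data is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm to build the visual-word vocabulary."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C

def bow_histogram(desc, vocab):
    """Normalized word-frequency histogram of a video's descriptors."""
    lab = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1).argmin(1)
    h = np.bincount(lab, minlength=len(vocab)).astype(float)
    return h / h.sum()

# Synthetic stand-ins for 3D SIFT descriptors: each action class draws its
# interest-point descriptors from a different mixture of prototype directions.
def fake_video(cls, n=80, dim=8):
    protos = np.eye(dim)
    mix = [0, 1, 2] if cls == "suture" else [3, 4, 5]
    return protos[rng.choice(mix, n)] * 2 + rng.normal(0, 0.3, (n, dim))

train = [(fake_video(c), c) for c in ["suture", "tying"] * 20]
vocab = kmeans(np.vstack([d for d, _ in train]), k=10)
feats = [(bow_histogram(d, vocab), c) for d, c in train]

def classify_nn(desc):
    h = bow_histogram(desc, vocab)
    return min(feats, key=lambda fc: np.linalg.norm(h - fc[0]))[1]

tests = [(fake_video(c), c) for c in ["suture", "tying"] * 10]
acc = np.mean([classify_nn(d) == c for d, c in tests])
print(f"NN accuracy on synthetic actions: {acc:.2f}")
```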
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
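The "simple analytic geometry" referred to above is, in the common single-camera formulation, back-projecting the clicked pixel through the calibrated camera and intersecting the viewing ray with the floor plane. A sketch under assumed intrinsics, camera height, and pitch (not the paper's actual calibration):

```python
import numpy as np

# Hypothetical calibration: intrinsics K, camera height above the floor, and
# a downward pitch. A clicked pixel (u, v) defines a viewing ray; the target's
# 3-D position is the ray's intersection with the floor plane z = 0.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
CAM_HEIGHT = 0.30                      # metres above the floor (assumed)
PITCH = np.radians(20.0)               # camera tilted down 20 degrees (assumed)

def pixel_to_floor(u, v):
    """Back-project pixel (u, v) and intersect the ray with the floor z = 0."""
    x, y, z = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    s, c = np.sin(PITCH), np.cos(PITCH)
    # Camera axes expressed in world coords (x right, y forward, z up):
    # optical axis pitched down, image-down axis tilted accordingly.
    d = np.array([x, z * c - y * s, -z * s - y * c])     # ray in world frame
    if d[2] >= 0:
        raise ValueError("ray does not hit the floor (at or above horizon)")
    t = CAM_HEIGHT / -d[2]
    return np.array([0.0, 0.0, CAM_HEIGHT]) + t * d

p = pixel_to_floor(320, 240)           # user clicks the image centre
print(f"target on floor at x={p[0]:.3f} y={p[1]:.3f} z={p[2]:.3f} m")
```

For the image centre this reduces to the textbook result: ground distance = height / tan(pitch), a useful sanity check on the calibration.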
Emergent of Burden Sharing of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Kusano, Takuya; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and is flexible across a variety of tasks. In such a system, the robots must build cooperative relations and act as an organization to attain a common purpose. Insight can be drawn from the group behavior of insects, which lack advanced individual abilities. Ants, for example, are social insects whose systematic collective activities emerge from very simple interactions; whereas ants communicate through chemical signals, humans communicate through words and gestures. In this paper, we focus on interaction from a psychological viewpoint: a human emotion model supplies the parameters underlying the robots' motion planning. The robots were made to perform two-way actions in a test field containing obstacles. As a result, burden sharing, with roles such as guide and carrier, emerged even though the robots had a very simple setup.
NASA Technical Reports Server (NTRS)
Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel
2015-01-01
The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms, in what is typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
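The behavior/action chaining described above can be sketched as a tiny supervisory-execution layer. The class names and example primitives below are hypothetical illustrations, not the SURROGATE API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    """One primitive the robot can execute; returns True on success."""
    name: str
    execute: Callable[[], bool]

@dataclass
class Behavior:
    """A named chain of actions run in sequence, aborting on failure."""
    name: str
    actions: List[Action] = field(default_factory=list)

    def run(self) -> bool:
        for act in self.actions:
            print(f"[{self.name}] executing {act.name}")
            if not act.execute():
                print(f"[{self.name}] {act.name} failed; aborting")
                return False
        return True

# Hypothetical supervisory command resolved into a chained behavior;
# the lambdas stand in for real perception and manipulation routines.
pick_up = Behavior("pick_up_object", [
    Action("locate_object", lambda: True),
    Action("approach", lambda: True),
    Action("grasp", lambda: True),
])
print("behavior succeeded:", pick_up.run())
```

In a real system each `execute` would block on (or monitor) an autonomous subsystem, which is what lets a single high-level intent expand into whole-body manipulation.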
Evaluation of teleoperated surgical robots in an enclosed undersea environment.
Doarn, Charles R; Anvari, Mehran; Low, Thomas; Broderick, Timothy J
2009-05-01
The ability to support surgical care in an extreme environment is a significant issue for both military medicine and space medicine. Telemanipulation systems, those that can be remotely operated from a distant site, have been used extensively by the National Aeronautics and Space Administration (NASA) for a number of years. These systems, often called telerobots, have successfully been applied to surgical interventions. A further extension is to operate these robotic systems over data communication networks where robotic slave and master are separated by a great distance. NASA utilizes the National Oceanographic and Atmospheric Administration (NOAA) Aquarius underwater habitat as an analog environment for research and technology evaluation missions, known as NASA Extreme Environment Mission Operations (NEEMO). Three NEEMO missions have provided an opportunity to evaluate teleoperated surgical robotics by astronauts and surgeons. Three robotic systems were deployed to the habitat for evaluation during NEEMO 7, 9, and 12. These systems were linked via a telecommunications link to various sites for remote manipulation. Researchers in the habitat conducted a variety of tests to evaluate performance and applicability in extreme environments. Over three different NEEMO missions, components of the Automated Endoscopic System for Optimal Positioning (AESOP), the M7 Surgical System, and the RAVEN were deployed and evaluated. A number of factors were evaluated, including communication latency and semiautonomous functions. The M7 was modified to permit a remote surgeon the ability to insert a needle into simulated tissue with ultrasound guidance, resulting in the world's first semi-autonomous supervisory-controlled medical task. The deployment and operation of teleoperated surgical systems and semi-autonomous, supervisory-controlled tasks were successfully conducted.
NASA Astrophysics Data System (ADS)
Hsu, Roy Chaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting
2013-01-01
The main purpose of this paper is to use a machine learning method together with the Kinect and its body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed remote control system achieves, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
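The classifier described above (a back-propagation neural network over Kinect skeleton features) can be sketched with a small two-layer network trained by hand-written backpropagation. The synthetic "skeleton features", network sizes, and training settings below are illustrative assumptions; only the gesture count of 7 comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for Kinect skeleton features: each of 7 gestures is a
# noisy perturbation of a prototype joint-coordinate vector.
N_GEST, DIM, HID = 7, 12, 24
protos = rng.normal(0, 1, (N_GEST, DIM))

def batch(n=64):
    y = rng.integers(N_GEST, size=n)
    return protos[y] + rng.normal(0, 0.25, (n, DIM)), y

W1 = rng.normal(0, 0.4, (DIM, HID)); b1 = np.zeros(HID)
W2 = rng.normal(0, 0.4, (HID, N_GEST)); b2 = np.zeros(N_GEST)

def forward(X):
    H = np.tanh(X @ W1 + b1)                      # hidden layer
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True))
    return H, P / P.sum(1, keepdims=True)         # softmax class probabilities

lr = 0.1
for _ in range(800):                              # backpropagation training
    X, y = batch()
    H, P = forward(X)
    G = P.copy(); G[np.arange(len(y)), y] -= 1.0; G /= len(y)  # dLoss/dlogits
    dH = G @ W2.T * (1 - H ** 2)                  # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

X, y = batch(500)
acc = (forward(X)[1].argmax(1) == y).mean()
print(f"gesture recognition accuracy: {acc:.2f}")
```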
NASA Technical Reports Server (NTRS)
Firby, R. James
1990-01-01
High-level robot control research must confront the limitations imposed by real sensors if robots are to be controlled effectively in the real world. In particular, sensor limitations make it impossible to maintain a complete, detailed world model of the situation surrounding the robot. To address the problems involved in planning with the resulting incomplete and uncertain world models, traditional robot control architectures must be altered significantly. Task-directed sensing and control is suggested as a way of coping with world model limitations by focusing sensing and analysis resources on only those parts of the world relevant to the robot's active goals. The RAP adaptive execution system is used as an example of a control architecture designed to deploy sensing resources in this way to accomplish both action and knowledge goals.
Using qualitative maps to direct reactive robots
NASA Technical Reports Server (NTRS)
Bertin, Randolph; Pendleton, Tom
1992-01-01
The principal advantage of mobile robots is that they are able to go to specific locations to perform useful tasks rather than have the tasks brought to them. It is important therefore that the robot be used to reach desired locations efficiently and reliably. A mobile robot whose environment extends significantly beyond its sensory horizon must maintain a representation of the environment, a map, in order to attain these efficiency and reliability requirements. We believe that qualitative mapping methods provide useful and robust representation schemes and that such maps may be used to direct the actions of a reactively controlled robot. In this paper we describe our experience in employing qualitative maps to direct, through the selection of desired control strategies, a reactive-behavior based robot. This mapping capability represents the development of one aspect of a successful deliberative/reactive hybrid control architecture.
Artificial heart for humanoid robot
NASA Astrophysics Data System (ADS)
Potnuru, Akshay; Wu, Lianjun; Tadesse, Yonas
2014-03-01
A soft robotic device inspired by the pumping action of a biological heart is presented in this study. Developing an artificial heart for a humanoid robot brings us closer to better biomedical devices for ultimate use in humans. As technology advances, so do the methods by which we can implement high-performance, biomimetic artificial organs. In this paper, we present the design and development of a soft artificial heart that can be used in a humanoid robot to simulate the functions of a human heart using shape memory alloy technology. The robotic heart is designed to pump a blood-like fluid to parts of the robot such as the face, using elastomeric substrates and dedicated fluid-transport features, to simulate someone blushing or becoming angry.
Long Range Navigation for Mars Rovers Using Sensor-Based Path Planning and Visual Localisation
NASA Technical Reports Server (NTRS)
Laubach, Sharon L.; Olson, Clark F.; Burdick, Joel W.; Hayati, Samad
1999-01-01
The Mars Pathfinder mission illustrated the benefits of including a mobile robotic explorer on a planetary mission. However, for future Mars rover missions, significantly increased autonomy in navigation is required in order to meet demanding mission criteria. To address these requirements, we have developed new path planning and localisation capabilities that allow a rover to navigate robustly to a distant landmark. These algorithms have been implemented on the JPL Rocky 7 prototype microrover and have been tested extensively in the JPL MarsYard, as well as in natural terrain.
Visual control of navigation in insects and its relevance for robotics.
Srinivasan, Mandyam V
2011-08-01
Flying insects display remarkable agility, despite their diminutive eyes and brains. This review describes our growing understanding of how these creatures use visual information to stabilize flight, avoid collisions with objects, regulate flight speed, detect and intercept other flying insects such as mates or prey, navigate to a distant food source, and orchestrate flawless landings. It also outlines the ways in which these insights are now being used to develop novel, biologically inspired strategies for the guidance of autonomous, airborne vehicles. Copyright © 2011 Elsevier Ltd. All rights reserved.
In good company? Perception of movement synchrony of a non-anthropomorphic robot.
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.
Social interaction enhances motor resonance for observed human actions.
Hogeveen, Jeremy; Obhi, Sukhvinder S
2012-04-25
Understanding the neural basis of social behavior has become an important goal for cognitive neuroscience and a key aim is to link neural processes observed in the laboratory to more naturalistic social behaviors in real-world contexts. Although it is accepted that mirror mechanisms contribute to the occurrence of motor resonance (MR) and are common to action execution, observation, and imitation, questions remain about mirror (and MR) involvement in real social behavior and in processing nonhuman actions. To determine whether social interaction primes the MR system, groups of participants engaged or did not engage in a social interaction before observing human or robotic actions. During observation, MR was assessed via motor-evoked potentials elicited with transcranial magnetic stimulation. Compared with participants who did not engage in a prior social interaction, participants who engaged in the social interaction showed a significant increase in MR for human actions. In contrast, social interaction did not increase MR for robot actions. Thus, naturalistic social interaction and laboratory action observation tasks appear to involve common MR mechanisms, and recent experience tunes the system to particular agent types.
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
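The training loop described above can be sketched as follows. This is a hedged illustration of the general idea (run reflexive behaviors, log sensor/action pairs, keep only the moments where the desired action happened by accident, and bias future action selection with them); the sensor labels and policy names are invented, and nothing here reflects the actual eDVS/SpiNNaker spiking implementation.

```python
import random

random.seed(1)

def reflex_policy(sensor):
    """Baseline hand-designed behavior: random exploration at a junction."""
    return random.choice(["left", "right"])

def collect_serendipitous(episodes, desired="right", trigger="mirror"):
    """Record only accidental successes: desired action in the trigger context."""
    dataset = []
    for _ in range(episodes):
        sensor = random.choice(["mirror", "wall", "open"])
        action = reflex_policy(sensor)
        if sensor == trigger and action == desired:
            dataset.append((sensor, action))
    return dataset

def learned_policy(sensor, dataset):
    """Prefer the recorded action for matching sensor states, else fall back."""
    for s, a in dataset:
        if s == sensor:
            return a
    return reflex_policy(sensor)

data = collect_serendipitous(200)
action_at_mirror = learned_policy("mirror", data)
```

After enough accidental demonstrations, the learned policy reliably turns right at the mirror while the reflexive baseline still governs all other situations.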
Neural architectures for robot intelligence.
Ritter, H; Steil, J J; Nölker, C; Röthling, F; McGuire, P
2003-01-01
We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data-mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our laboratory in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
The efficacy of using human myoelectric signals to control the limbs of robots in space
NASA Technical Reports Server (NTRS)
Clark, Jane E.; Phillips, Sally J.
1988-01-01
This project was designed to investigate the usefulness of the myoelectric signal as a control in robotics applications. More specifically, the neural patterns associated with human arm and hand actions were studied to determine the efficacy of using these myoelectric signals to control the manipulator arm of a robot. The advantage of this approach to robotic control was the use of well-defined and well-practiced neural patterns already available to the system, as opposed to requiring the human operator to learn new tasks and establish new neural patterns in learning to control a joystick or mechanical coupling device.
Monitored execution of robot plans produced by STRIPS.
NASA Technical Reports Server (NTRS)
Fikes, R. E.
1972-01-01
We describe PLANEX1, a plan executor for the Stanford Research Institute robot system. The problem-solving program STRIPS creates a plan consisting of a sequence of actions, and the PLANEX1 program carries out the plan by executing the actions. PLANEX1 is designed so that it executes only that portion of the plan necessary for completing the task, reexecutes any portion of the plan that has failed to achieve the desired results, and initiates replanning in situations where the plan can no longer be effective in completing the task. The scenario for an example plan execution is given.
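The three execution rules above (skip actions whose effects already hold, retry failed actions, signal for replanning when retries are exhausted) can be sketched as a monitored executor loop. This is an illustrative reconstruction, not the original SRI code; the toy domain and retry limit are assumptions.

```python
def execute_plan(plan, world, max_retries=2):
    """plan: list of (action_name, effect_predicate, apply_fn) tuples."""
    for name, holds, apply_fn in plan:
        if holds(world):
            continue                      # execute only the necessary portion
        for _attempt in range(max_retries + 1):
            apply_fn(world)
            if holds(world):
                break                     # re-execute until the effect holds
        else:
            return ("replan", name)       # plan can no longer succeed here
    return ("done", None)

# Toy domain: a flaky "grasp" action that succeeds only on the second try.
world = {"at_box": False, "holding": False, "grasp_tries": 0}

def do_goto(w):
    w["at_box"] = True

def do_grasp(w):
    w["grasp_tries"] += 1
    if w["grasp_tries"] >= 2:
        w["holding"] = True

plan = [("goto", lambda w: w["at_box"], do_goto),
        ("grasp", lambda w: w["holding"], do_grasp)]
status = execute_plan(plan, world)
```

The executor retries the failed grasp once and completes the plan; returning `("replan", name)` instead would hand control back to the planner, as PLANEX1 hands back to STRIPS.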
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the authors use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. Preliminary experiments with a system that they designed and constructed are described.
Nomenclature in laboratory robotics and automation (IUPAC Recommendation 1994)
(Skip) Kingston, H. M.; Kingston, M. L.
1994-01-01
These recommended terms have been prepared to help provide a uniform approach to terminology and notation in laboratory automation and robotics. Since the terminology used in laboratory automation and robotics has been derived from diverse backgrounds, it is often vague, imprecise, and in some cases, in conflict with classical automation and robotic nomenclature. These definitions have been assembled from standards, monographs, dictionaries, journal articles, and documents of international organizations emphasizing laboratory and industrial automation and robotics. When appropriate, definitions have been taken directly from the original source and identified with that source. However, in some cases no acceptable definition could be found and a new definition was prepared to define the object, term, or action. Attention has been given to defining specific robot types, coordinate systems, parameters, attributes, communication protocols and associated workstations and hardware. Diagrams are included to illustrate specific concepts that can best be understood by visualization. PMID:18924684
Wu, Ya-Huei; Faucounau, Véronique; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie
2011-03-01
Researchers in robotics have been increasingly focusing on robots as a means of supporting older people with cognitive impairment at home. The aim of this study is to explore the elderly's needs and preferences towards having an assistive robot in the home. In order to ensure the appropriateness of this technology, 30 subjects aged 60 and older with memory complaints were recruited from the Memory Clinic of the Broca Hospital. We conducted an interview-administered questionnaire that included questions about their needs and preferences concerning robot functions and modes of action. The subjects reported a desire to retain their capacity to manage their daily activities, to maintain good health and to stimulate their memory. Regarding robot functions, the cognitive stimulation programme earned the highest proportion of positive responses, followed by the safeguarding functions, fall detection and the automatic help call. © The Author(s) 2010.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
Four robots vie for position on the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Student teams, shown behind protective walls, play defense by taking away competitors' pillows and generally harassing opposing machines. Two of the robots have lifted their caches of pillows above the field, a movement which earns them points. Along with the volunteer referees, at the edge of the playing field, judges at right watch the action. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Metalevel programming in robotics: Some issues
NASA Technical Reports Server (NTRS)
Kumarn, A.; Parameswaran, N.
1987-01-01
Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are usually implemented in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, the authors discuss a framework in which controls can be described to improve efficiency. They divide controls into (1) in-code and (2) metalevel, and discuss them with reference to selection of rules and dataflow. The merit of Logic Programming is illustrated by modelling the motion of a robot from one point to another while avoiding obstacles.
Person-like intelligent systems architectures for robotic shared control and automated operations
NASA Technical Reports Server (NTRS)
Erickson, Jon D.; Aucoin, Paschal J., Jr.; Ossorio, Peter G.
1992-01-01
An approach to rendering robotic systems as 'personlike' as possible to achieve needed capabilities is outlined. Human characteristics such as knowledge, motivation, know-how, performance, achievement and individual differences corresponding to propensities and abilities can be supplied, within limits, with computing software and hardware to robotic systems provided with sufficiently rich sensory configurations. Pushing these limits is the developmental path for more and more personlike robotic systems. The portions of the Person Concept that appear to be most directly relevant to this effort are described in the following topics: reality concepts (the state-of-affairs system and descriptive formats, behavior as intentional action, individual persons (person characteristics), social patterns of behavior (social practices), and boundary conditions (status maxims). Personlike robotic themes and considerations for a technical development plan are also discussed.
Grounding language in action and perception: From cognitive agents to humanoid robots
NASA Astrophysics Data System (ADS)
Cangelosi, Angelo
2010-06-01
In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of humanoid robotic platform, and specifically of the open source iCub platform, for the study of embodied cognition.
Robotics in cardiac surgery: the Istanbul experience.
Sagbas, Ertan; Akpinar, Belhhan; Sanisoglu, Ilhan; Caynak, Baris; Guden, Mustafa; Ozbek, Ugur; Bayramoglu, Zehra; Bayindir, Osman
2006-06-01
Robots are sensor-based tools capable of performing precise, accurate and versatile actions. Initially designed to spare humans from risky tasks, robots have progressed into revolutionary tools for surgeons. Tele-operated robots, such as the da Vinci (Intuitive Surgical, Mountain View, CA), have allowed cardiac procedures to start benefiting from robotics as an enhancement to traditional minimally invasive surgery. The aim of this text was to discuss our experience with the da Vinci system during a 12 month period in which 61 cardiac patients were operated on. There were 59 coronary bypass patients (CABG) and two atrial septal defect (ASD) closures. Two patients (3.3%) had to be converted to median sternotomy because of pleural adhesions. There were no procedure- or device-related complications. Our experience suggests that robotics can be integrated into routine cardiac surgical practice. Systematic training, team dedication and proper patient selection are important factors that determine the success of a robotic surgery programme. Copyright 2006 John Wiley & Sons, Ltd.
An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant
NASA Astrophysics Data System (ADS)
Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter
2017-09-01
One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed to a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs to carry out the assistance. The aim of this research is to equip robot systems with the knowledge and skills to perform their assistance autonomously and thereby improve the ergonomics of disassembly workstations.
NASA Astrophysics Data System (ADS)
Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.
2016-08-01
This article aims to implement some practical applications using the Socibot Desktop social robot. We realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory in order to be projected onto its face. The first application is created in the Compose submenu, which contains 5 file categories: audio, eyes, face, head and mood, and which supports the creation of the projected sequence. The second application is more complex, the completed program containing: audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions and its line of sight. The last application changes the robot's appearance to the guise created by us. The guise was created in Adobe Photoshop and then loaded into the robot's memory.
A robot sets a table: a case for hybrid reasoning with different types of knowledge
NASA Astrophysics Data System (ADS)
Mansouri, Masoumeh; Pecora, Federico
2016-09-01
An important contribution of AI to Robotics is the model-centred approach, whereby competent robot behaviour stems from automated reasoning in models of the world which can be changed to suit different environments, physical capabilities and tasks. However models need to capture diverse (and often application-dependent) aspects of the robot's environment and capabilities. They must also have good computational properties, as robots need to reason while they act in response to perceived context. In this article, we investigate the use of a meta-CSP-based technique to interleave reasoning in diverse knowledge types. We reify the approach through a robotic waiter case study, for which a particular selection of spatial, temporal, resource and action KR formalisms is made. Using this case study, we discuss general principles pertaining to the selection of appropriate KR formalisms and jointly reasoning about them. The resulting integration is evaluated both formally and experimentally on real and simulated robotic platforms.
Robotic Intelligence Kernel: Driver
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.
Low-Cost Educational Robotics Applied to Physics Teaching in Brazil
ERIC Educational Resources Information Center
Souza, Marcos A. M.; Duarte, José R. R.
2015-01-01
In this paper, we propose some of the strategies and methodologies for teaching high-school physics topics through an educational robotics show. This exhibition was part of a set of actions promoted by a Brazilian government program of incentive for teaching activities, whose primary focus is the training of teachers, the improvement of teaching…
The bedside assistant in robotic surgery--keys to success.
Yuh, Bertram
2013-01-01
Taking on the position of bedside assistant for a surgical robotic team can be a daunting task. Keys to success include preparation, proper operation set up, effective use of instruments to augment the actions of the console surgeon, and readiness for surgical emergencies. Effective communication, repetitive execution, and readiness facilitate the efforts of the surgical team.
Assembly, Tuning, and Transfer of Action Systems in Infants and Robots
ERIC Educational Resources Information Center
Berthouze, Luc; Goldfield, Eugene C.
2008-01-01
This paper seeks to foster a discussion on whether experiments with robots can inform theory in infant motor development and specifically (1) how the interactions among the parts of a system, including the nervous and musculoskeletal systems and the forces acting on the body, induce organizational changes in the whole, and (2) how exploratory…
Independent Review Support for Phoenix Mars Mission Robotic Arm Brush Motor Failure
NASA Technical Reports Server (NTRS)
McManamen, John P.; Pellicciotti, Joseph; DeKramer, Cornelis; Dube, Michael J.; Peeler, Deborah; Muirhead, Brian K.; Sevilla, Donald R.; Sabahi, Dara; Knopp, Michael D.
2007-01-01
The Phoenix Project requested that the NASA Engineering and Safety Center (NESC) perform an independent peer review of the Robotic Arm (RA) Direct Current (DC) motor brush anomalies that originated during the Mars Exploration Rover (MER) Project and recurred during the Phoenix Project. The request was to evaluate the Phoenix Project investigation efforts and provide an independent risk assessment, including a recommendation for additional work and an assessment of the flight worthiness of the RA DC motors. Based on the investigation and findings contained within this report, the IRT concurs with the project's Failure Cause/Corrective Action (FC/CA) risk assessment: Failure Effect Rating "3" (Major Degradation or Total Loss of Function); Failure Cause/Corrective Action Rating currently "4" (Unknown Cause, Uncertainty in Corrective Action).
Applications of artificial intelligence in safe human-robot interactions.
Najmaei, Nima; Kermani, Mehrdad R
2011-04-01
The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
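The self-organizing map at the core of the approach above can be illustrated with a standard Kohonen update rule. This is a generic sketch, not the authors' network: the grid size, learning rate, neighborhood width, and the 2-D "footprint" data are all assumptions, standing in for the sensory-system footprint information the paper feeds into the SOM.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(samples, grid=5, lr=0.5, sigma=1.5, epochs=20):
    """Kohonen SOM: units self-organize to cover the input distribution."""
    units = rng.random((grid * grid, 2))                  # unit weight vectors
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)])
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(((units - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))                # neighborhood kernel
            units += lr * h[:, None] * (x - units)            # pull units toward x
    return units

# Human footprints clustered near one corner of a unit-square workspace.
footprints = rng.normal(loc=[0.8, 0.2], scale=0.05, size=(100, 2))
units = train_som(footprints)
mean_err = np.abs(units.mean(axis=0) - [0.8, 0.2]).max()
```

Once trained, the unit positions summarize where the human tends to be, which is the kind of model the paper then uses to assess danger and trigger preventive robot actions.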
Integration of task level planning and diagnosis for an intelligent robot
NASA Technical Reports Server (NTRS)
Gerstenfeld, Arthur
1988-01-01
The use of robots in the future must go beyond present applications and will depend on the ability of a robot to adapt to a changing environment and to deal with unexpected scenarios (e.g., picking up parts that are not exactly where they were expected to be). The objective of this research is to demonstrate the feasibility of incorporating high-level planning into a robot, enabling it to deal with anomalous situations and minimizing the need for constant human instruction. Heuristics can be used by a robot to apply information about previous actions toward accomplishing future objectives more efficiently. The system uses a decision network that represents the plan for accomplishing a task, enabling the robot to modify its plan based on the results of previous actions. The system serves as a method for minimizing the need for constant human instruction in telerobotics. This paper describes the integration of expert systems and simulation as a valuable tool whose usefulness extends far beyond this project. Simulation can be expected to be used increasingly as both hardware and software improve. Similarly, the ability to merge an expert system with simulation means that intelligence can be added to the system. A malfunctioning space satellite scenario is described, in which the expert system uses a series of heuristics to guide the robot to the proper location; this is part of task-level planning. The final part of the paper suggests directions for future research. Having shown the feasibility of an expert system embedded in a simulation, the paper then discusses how the system can be integrated with the MSFC graphics system.
Weintek interfaces for controlling the position of a robotic arm
NASA Astrophysics Data System (ADS)
Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.
2016-08-01
The paper presents the use of Weintek panels to control the position of a robotic arm operated step by step along its three motor axes. The PLC control interface is designed with a Weintek touch screen. The HMI Weintek eMT3070a is the user interface in the PLC command process. This HMI controls the local PLC, entering coordinates on the X, Y, and Z axes. The setup also allows development in a virtual environment for e-learning and for monitoring the robotic arm's actions.
Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior.
Ficocelli, Maurizio; Terao, Junichi; Nejat, Goldie
2016-12-01
The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
Model learning for robot control: a survey.
Nguyen-Tuong, Duy; Peters, Jan
2011-11-01
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamic level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these approaches have been used successfully in several case studies.
Hu, Xiao-Ling; Tong, Raymond Kai-yu; Ho, Newmen S K; Xue, Jing-jing; Rong, Wei; Li, Leonard S W
2015-09-01
Augmented physical training with assistance from a robot and neuromuscular electrical stimulation (NMES) may produce intensive motor improvement in chronic stroke. The objective was to compare the rehabilitation effectiveness achieved by NMES robot-assisted wrist training with that achieved by robot-assisted training alone. This study was a single-blinded randomized controlled trial with a 3-month follow-up. Twenty-six hemiplegic subjects with chronic stroke were randomly assigned to receive 20-session wrist training with an electromyography (EMG)-driven NMES robot (NMES robot group, n = 11) or with an EMG-driven robot (robot group, n = 15), completed within 7 consecutive weeks. Clinical scores, Fugl-Meyer Assessment (FMA), Modified Ashworth Score (MAS), and Action Research Arm Test (ARAT), were used to evaluate the training effects before and after the training, as well as 3 months later. An EMG parameter, the muscle co-contraction index, was also applied to investigate the session-by-session variation in muscular coordination patterns during the training. The improvement in FMA (shoulder/elbow, wrist/hand) obtained in the NMES robot group was more significant than in the robot group (P < .05). Significant improvement in ARAT was achieved in the NMES robot group (P < .05) but absent in the robot group. NMES robot-assisted training showed better performance in releasing muscle co-contraction than robot-assisted training across the training sessions (P < .05). The NMES robot-assisted wrist training was more effective than pure robot-assisted training. The additional NMES application in the treatment could bring more improvement in distal motor functions and faster rehabilitation progress. © The Author(s) 2014.
Motor contagion during human-human and human-robot interaction.
Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry
2014-01-01
Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were executed with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared with both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance, and modulate the spontaneity and the pleasantness of the interaction, whatever the nature of the communication partner.
Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer
Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan
2012-01-01
Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853
Developmental Approach for Behavior Learning Using Primitive Motion Skills.
Dawood, Farhan; Loo, Chu Kiong
2018-05-01
Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could first be achieved through imitation, yet the discussion on the origin of primitive imitative abilities is often neglected, referring instead to the possibility of its innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities as induced by sensorimotor associative learning through self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw camera images, without requiring any kinematic model; incremental learning of spatio-temporal motion sequences that dynamically generates a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and use of the segmented motion primitives to generate complex behavior by combining them. In our experiment, the robot acquires its self-posture by observing its own body posture while performing actions in front of a mirror through body babbling. The complete architecture was evaluated by simulation and by real-robot experiments performed on the DARwIn-OP humanoid robot.
Reactive, Safe Navigation for Lunar and Planetary Robots
NASA Technical Reports Server (NTRS)
Utz, Hans; Ruland, Thomas
2008-01-01
When humans return to the moon, astronauts will be accompanied by robotic helpers. Enabling robots to safely operate near astronauts on the lunar surface has the potential to significantly improve the efficiency of crew surface operations. Safely operating robots in close proximity to astronauts requires reactive obstacle avoidance capabilities not available on existing planetary robots. In this paper we present work on safe, reactive navigation using a stereo-based high-speed terrain analysis and obstacle avoidance system. Advances in the design of the algorithms allow terrain analysis and obstacle avoidance to run at full frame rate (30 Hz) on off-the-shelf hardware. The results of this analysis are fed into a fast, reactive path selection module that enforces the safety of the chosen actions. The key components of the system are discussed and test results are presented.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
Student teams behind protective walls operate remote controls to maneuver their robots around the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. The robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Teams played defense by taking away competitors' pillows and generally harassing opposing machines. On the side of the field are the judges, including (far left) Deputy Director for Launch and Payload Processing Loren Shriver and former KSC Director of Shuttle Processing Robert Sieck. A giant screen TV displays the action on the field. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Motion generation of peristaltic mobile robot with particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Homma, Takahiro; Kamamichi, Norihiro
2015-03-01
In robot development, bio-mimetics is attracting attention as a technology for designing structures and functions inspired by biological systems. There are many examples of bio-mimetics in robotics, such as legged robots, flapping robots, insect-type robots, and fish-type robots. In this study, we focus on the motion of the earthworm and aim to develop a peristaltic mobile robot. The earthworm is a slender animal that moves through soil. It has a segmented body, and each segment can be shortened and lengthened by muscular action. It moves forward by propagating the expanding motion of each segment backward along the body. By mimicking the structure and motion of the earthworm, we can construct a robot with high locomotive performance on irregular ground or in narrow spaces. In this paper, to investigate the motion analytically, a dynamical model consisting of a series-connected multi-mass system is introduced. Simple periodic patterns that mimic the motions of earthworms are applied in an open-loop fashion, and the resulting movement patterns are verified through numerical simulations. Furthermore, to generate efficient motion of the robot, a particle swarm optimization algorithm, one of the meta-heuristic optimization methods, is applied. The optimized results are evaluated by comparison with the simple periodic patterns.
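A minimal particle swarm optimizer of the kind the abstract mentions can be sketched as follows; the parameters are generic textbook values, and the cost function that would score simulated locomotion is stubbed out here with a simple quadratic:

```python
import random

random.seed(0)  # deterministic for illustration

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimize `objective` over `dim` variables with a basic global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in cost over two gait parameters (e.g. wave amplitude and phase lag)
best, best_val = pso(lambda p: sum(x * x for x in p), dim=2)
```

In the paper's setting the objective would instead run the multi-mass simulation and return, say, the negative distance traveled per cycle.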
ERIC Educational Resources Information Center
Casler-Failing, Shelli L.
2017-01-01
This mixed methods, action research case study sought to investigate the effects of incorporating LEGO robotics into a seventh grade mathematics curriculum focused on the development of proportional reasoning through the lens of Social Constructivist Theory. Quantitative data was collected via pre- and post-tests from the mathematics class of six…
Tutorial Workshop on Robotics and Robot Control.
1982-10-26
US Army Tank-Automotive Command, Warren, Michigan; US Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland; California Institute of Technology, Pasadena, California 91103. M. Vukovic, Senior Research Associate, Institute for Technoeconomic Systems, Department of Industrial... Further investigation of the action precedence graphs, together with their application to more complex manipulator tasks and analysis of their...
Can Robots Help the Learning of Skilled Actions?
Reinkensmeyer, David J.; Patton, James L.
2010-01-01
Learning to move skillfully requires that the motor system adjusts muscle commands based on ongoing performance errors, a process influenced by the dynamics of the task being practiced. Recent experiments from our laboratories show how robotic devices can temporarily alter task dynamics in ways that contribute to the motor learning experience, suggesting possible applications in rehabilitation and sports training. PMID:19098524
Telerobotics for Human Exploration: Enhancing Crew Capabilities in Deep Space
NASA Technical Reports Server (NTRS)
Fong, Terrence
2013-01-01
Future space missions in Earth orbit, to the Moon, and to other distant destinations offer many new opportunities for exploration. But astronaut time will always be limited, and some work will not be feasible or efficient for humans to perform manually. Telerobots, however, can complement human explorers, performing work under remote control from Earth, from orbit, or from nearby habitats. A central challenge, therefore, is to understand how humans and remotely operated robots can be jointly employed to maximize mission performance and success. This presentation provides an overview of the key issues in using telerobots for human exploration.
An Intelligent Catheter System Robotic Controlled Catheter System
Negoro, M.; Tanimoto, M.; Arai, F.; Fukuda, T.; Fukasaku, K.; Takahashi, I.; Miyachi, S.
2001-01-01
Summary. We have developed a novel catheter system, an intelligent catheter system, which is able to control a catheter via an externally placed controller. The system is built on a master-slave mechanism and has the following three components: 1) a joystick as the master (for the operator); 2) a catheter controller as the slave (for the patient); and 3) a micro force sensor as the sensing device. This catheter tele-guiding system makes it possible to perform intravascular procedures from distant places. It may help reduce radiation exposure to operators and also help train young doctors. PMID:20663387
Science Operations During Planetary Surface Exploration: Desert-RATS Tests 2009-2011
NASA Technical Reports Server (NTRS)
Cohen, Barbara
2012-01-01
NASA's Research and Technology Studies (RATS) team evaluates technology, human-robotic systems, and extravehicular equipment for use in future human space exploration missions. Tests are conducted in simulated space environments, or analog tests, using prototype instruments, vehicles, and systems. NASA engineers, scientists, and technicians from across the country gather annually with representatives from industry and academia to perform the tests. Test scenarios include future missions to near-Earth asteroids (NEAs), the moon, and Mars. Mission simulations help determine system requirements for exploring distant locations while developing the technical skills required of the next generation of explorers.
Development of a vision non-contact sensing system for telerobotic applications
NASA Astrophysics Data System (ADS)
Karkoub, M.; Her, M.-G.; Ho, M.-I.; Huang, C.-C.
2013-08-01
The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The colour tags' motion is determined through image processing using eigenvectors and colour-space morphology, and the relative head, shoulder, and wrist rotation angles are obtained through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. Videos of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.
2002-07-01
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (the A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include resequencing the pending tasks or initiating a re-planning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
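The planning function's graph-based heuristic search can be illustrated with a generic A* sketch; this is a textbook version on a toy occupancy grid, not the CSOIS/USU code, and the grid, costs, and heuristic are illustrative:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Graph-based heuristic search: returns a node list from start to goal."""
    open_set = [(heuristic(start, goal), start)]
    came_from = {}
    g = {start: 0.0}  # best known cost-so-far per node
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for nxt, cost in neighbors(current):
            tentative = g[current] + cost
            if tentative < g.get(nxt, float("inf")):
                g[nxt] = tentative
                came_from[nxt] = current
                heapq.heappush(open_set, (tentative + heuristic(nxt, goal), nxt))
    return None  # no path exists

# 4-connected 4x4 grid with two blocked cells
blocked = {(1, 1), (1, 2)}
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in blocked:
            yield (nx, ny), 1.0

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path = a_star((0, 0), (3, 3), grid_neighbors, manhattan)
```

In the STC setting the nodes would be world states or waypoints and the edge costs would come from the partial map; re-planning after an interruption amounts to re-running the search from the updated state.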
Quilodrán, Claudio S.; Currat, Mathias; Montoya-Burgos, Juan I.
2014-01-01
Interspecific hybridization is common in nature but can be increased in frequency or even originated by human actions, such as species introduction or habitat modification, which may threaten species persistence. When hybridization occurs between distantly related species, referred to as “distant hybridization,” the resulting hybrids are generally infertile or fertile but do not undergo chromosomal recombination during gametogenesis. Here, we present a model describing this frequent but poorly studied interspecific hybridization to assess its consequences on parental species and to anticipate the conditions under which they can reach extinction. Our general model fully incorporates three important processes: density-dependent competition, dominance/recessivity inheritance of traits and assortative mating. We demonstrate its use and flexibility by assessing population extinction risk between Atlantic salmon and brown trout in Norway, whose interbreeding has recently increased due to farmed fish releases into the wild. We identified the set of conditions under which hybridization may threaten salmonid species. Thanks to the flexibility of our model, we evaluated the effect of an additional risk factor, a parasitic disease, and showed that the cumulative effects dramatically increase the extinction risk. The consequences of distant hybridization are not genetically, but demographically mediated. Our general model is useful to better comprehend the evolution of such hybrid systems and we demonstrated its importance in the field of conservation biology to set up management recommendations when this increasingly frequent type of hybridization is in action. PMID:25003336
Co-development of manner and path concepts in language, action, and eye-gaze behavior.
Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J
2014-07-01
In order for artificial intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration are examined. A special focus is put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature in cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprising three age groups of children (pre-lexical, early lexical, and lexical) and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior, the tutoring spotter system, able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects natural and effective behavior when teaching a robot.
The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human-robot interaction. Copyright © 2014 Cognitive Science Society, Inc.
Grounding language in action and perception: from cognitive agents to humanoid robots.
Cangelosi, Angelo
2010-06-01
In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work focuses on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models are discussed, in which the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of using humanoid robotic platforms, and specifically the open-source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
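The stigmergic building rule described above (agents perceive only the current structure and deposit material where a local configuration rule fires, with no inter-agent communication) can be sketched in a few lines. The grid world, random walk, and single "deposit next to existing structure" rule below are illustrative simplifications, not the prototype's actual algorithm.

```python
import random

def stigmergic_build(grid_size=11, n_agents=5, steps=2000, seed=0):
    """Minimal stigmergic construction on a 2D grid: each agent deposits a
    block only where the local configuration of existing blocks matches a
    rule (here: the site is adjacent to the seed structure). Agents never
    communicate; coordination emerges through the structure itself."""
    rng = random.Random(seed)
    built = {(grid_size // 2, grid_size // 2)}  # seed block
    agents = [(rng.randrange(grid_size), rng.randrange(grid_size))
              for _ in range(n_agents)]
    for _ in range(steps):
        for i, (x, y) in enumerate(agents):
            # Each agent perceives only the current state of the structure.
            neighbours = [(x + dx, y + dy) for dx, dy in
                          ((1, 0), (-1, 0), (0, 1), (0, -1))]
            if (x, y) not in built and any(n in built for n in neighbours):
                built.add((x, y))  # local rule fires: deposit a block
            # Random walk, clipped to the grid.
            x = min(max(x + rng.choice((-1, 0, 1)), 0), grid_size - 1)
            y = min(max(y + rng.choice((-1, 0, 1)), 0), grid_size - 1)
            agents[i] = (x, y)
    return built
```

Because each block is deposited adjacent to an existing one, the resulting structure is always connected, mirroring the paper's observation that a global shape can emerge from purely local perception.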
Periodic activations of behaviours and emotional adaptation in behaviour-based robotics
NASA Astrophysics Data System (ADS)
Burattini, Ernesto; Rossi, Silvia
2010-09-01
The possible modulatory influence of motivations and emotions is of great interest in designing robotic adaptive systems. In this paper, an attempt is made to connect the concept of periodic behaviour activations to emotional modulation, in order to link the variability of behaviours to the circumstances in which they are activated. The impact of emotions, described as timed controlled structures, on simple but conflicting reactive behaviours is studied. Through this approach it is shown that the introduction of such asynchronies in the robot control system may lead to an adaptation in the emergent behaviour without having an explicit action selection mechanism. The emergent behaviours of a simple robot designed with both a parallel and a hierarchical architecture are evaluated and compared.
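The idea of periodic behaviour activations can be illustrated with a minimal sketch: each behaviour is released only during a fraction of its activation period, so two conflicting reactive behaviours time-share the actuators without an explicit action selection mechanism. The fixed period and duty cycle below are illustrative placeholders; in the paper these timings would be modulated by the emotional state.

```python
def behaviour_active(t, period, duty):
    """Periodic activation: the behaviour is released only during the
    fraction `duty` of each activation period."""
    return (t % period) < duty * period

def emergent_command(t, avoid_cmd, seek_cmd):
    """Two conflicting reactive behaviours time-share control through
    periodic activation; no explicit arbiter selects between them.
    Period and duty are fixed here, not emotion-modulated as in the paper."""
    if behaviour_active(t, period=1.0, duty=0.3):
        return avoid_cmd
    return seek_cmd
```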
Dynamics of underwater legged locomotion: modeling and experiments on an octopus-inspired robot.
Calisti, M; Corucci, F; Arienti, A; Laschi, C
2015-07-30
This paper studies underwater legged locomotion (ULL) by means of a robotic octopus-inspired prototype and its associated model. Two different types of propulsive actions are embedded into the robot model: reaction forces due to leg contact with the ground and hydrodynamic forces such as the drag arising from the sculling motion of the legs. Dynamic parameters of the model are estimated by means of evolutionary techniques and subsequently the model is exploited to highlight some distinctive features of ULL. Specifically, the separation between the center of buoyancy (CoB) and the center of mass, together with the density, affects the stability and speed of the robot, whereas the sculling movements contribute to propelling the robot even when its legs are detached from the ground. The relevance of these effects is demonstrated through robotic experiments and model simulations; moreover, by slightly changing the position of the CoB in the presence of the same feed-forward activation, a number of different behaviors (i.e. forward and backward locomotion at different speeds) are achieved.
Study on the intelligent decision making of soccer robot side-wall behavior
NASA Astrophysics Data System (ADS)
Zhang, Xiaochuan; Shao, Guifang; Tan, Zhi; Li, Zushu
2007-12-01
The side-wall is a static obstacle in robot soccer, and making reasonable use of it can improve a soccer robot's competitive ability. The side-wall processing strategy of a soccer robot, as a kind of artificial life, is influenced by many factors, such as the game state, the field region, and the attacking or defending situation, each with a different degree of influence; side-wall behavior selection is therefore an intelligent selection process. From a human-simulating viewpoint, and based on the idea of side-wall processing priority [1], this paper builds a priority function for side-wall processing, constructs an action-prediction model for the side-wall obstacle, puts forward a side-wall processing strategy, and forms a side-wall behavior selection mechanism. Comparative experiments with and without the strategy show that it improves the soccer robot's capability, is feasible and effective, and contributes positively to further study of robot soccer.
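A priority function of the kind described can be sketched as a weighted combination of game-state factors, with a threshold deciding when the wall-handling behaviour is selected. The factors, weights, and threshold below are hypothetical placeholders, not the paper's actual formulation.

```python
def sidewall_priority(ball_near_wall, robot_attacking, in_corner_region,
                      weights=(0.5, 0.3, 0.2)):
    """Illustrative priority function for side-wall handling: weight a few
    boolean game-state factors into a single priority score. The chosen
    factors and weights are assumptions for this sketch."""
    factors = (ball_near_wall, robot_attacking, in_corner_region)
    return sum(w * float(f) for w, f in zip(weights, factors))

def select_behaviour(priority, threshold=0.5):
    """Select the wall-handling behaviour when the priority crosses a
    threshold; otherwise keep the normal play strategy."""
    return 'wall_play' if priority >= threshold else 'normal_play'
```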
Shock and Vibration Control of a Golf-Swing Robot at Impacting the Ball
NASA Astrophysics Data System (ADS)
Hoshino, Yohei; Kobayashi, Yukinori
A golf swing robot is a kind of fast-motion manipulator with a flexible link. A robot manipulator is greatly affected by Coriolis and centrifugal forces during fast motion. Nonlinearity due to these forces can have an adverse effect on the performance of feedback control. In the same way, ordinary state observers for a linear system cannot accurately estimate the states of nonlinear systems. This paper uses a state observer that considers disturbances to improve the performance of state estimation and feedback control. A mathematical model of the golf robot is derived by Hamilton's principle. A linear quadratic regulator (LQR) that considers the vibration of the club shaft is used to stop the robot during the follow-through action. The state observer that considers disturbances estimates accurate state variables when the disturbances due to Coriolis and centrifugal forces and impact forces act on the robot. As a result, the performance of the state feedback control is improved. The study compares the results of numerical simulations with experimental results.
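The LQR component can be illustrated on a toy plant. The sketch below computes a discrete-time LQR gain by iterating the Riccati recursion for a double integrator; the plant, weighting matrices, and sampling time are illustrative assumptions, not the golf-robot model from the paper.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=1000):
    """Discrete-time LQR: iterate the Riccati recursion to (near) a fixed
    point and return the state-feedback gain K, with u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative double-integrator plant (not the golf-robot dynamics).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([1.0, 0.1]), R=np.array([[0.01]]))

# Closed-loop eigenvalues lie inside the unit circle, i.e. the regulator
# stabilizes the plant.
eigs = np.linalg.eigvals(A - B @ K)
```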
Framework for robot skill learning using reinforcement learning
NASA Astrophysics Data System (ADS)
Wei, Yingzi; Zhao, Mingyang
2003-09-01
Robot skill acquisition is a process similar to human skill learning. Reinforcement learning (RL) is an on-line actor-critic method for a robot to develop its skill. The reinforcement function is the critical component for evaluating actions and guiding the learning process. We present an augmented reward function that provides a new way for the RL controller to incorporate prior knowledge and experience. The difference form of the augmented reward function is also considered carefully. The additional reward, beyond the conventional reward, provides more heuristic information for RL. In this paper, we present a strategy for the task of complex skill learning: an automatic robot shaping policy dissolves the complex skill into a hierarchical learning process. A new form of value function is introduced to attain smooth motion switching swiftly. We present a formal but practical framework for robot skill learning and illustrate with an example the utility of the method for learning skilled robot control on line.
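One standard way to realize an augmented reward in its difference form is potential-based shaping, where the extra reward is gamma * phi(s') - phi(s) for a heuristic potential phi encoding prior knowledge. The sketch below uses that well-known formulation as an illustration; the paper's exact augmented reward function may differ in detail.

```python
def shaped_reward(base_reward, phi, s, s_next, gamma=0.99):
    """Potential-based reward shaping: add the difference form
    F = gamma * phi(s') - phi(s) to the environment reward. This injects
    prior knowledge (the potential phi) as extra heuristic signal."""
    return base_reward + gamma * phi(s_next) - phi(s)

# Toy example: states are positions on a line with the goal at x = 10;
# the potential encodes the heuristic "closer to the goal is better".
phi = lambda s: -abs(10 - s)
r_toward = shaped_reward(0.0, phi, s=3, s_next=4, gamma=1.0)
```

With gamma = 1, a step toward the goal earns a positive shaping bonus and a step away earns a negative one, which is exactly the heuristic guidance the abstract describes.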
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces the robot to coordinate its actions with those of other robots, e.g. by means of negotiations. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio
2014-01-01
The work described is part of a long term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets-an extension of Petri nets that takes into account robot actions and sensing-to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.
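The core Petri net machinery behind such controllers can be sketched compactly: places hold tokens, and a transition fires by consuming a token from each input place and producing one in each output place. The guard callable below stands in for the robot sensing actions of executable Petri nets, and the wireless-connectivity example is a simplified illustration, not the paper's actual controller.

```python
class PetriNet:
    """Minimal Petri net: transitions consume one token from each input
    place and produce one in each output place. The guard callable
    abstracts the action/sensing hooks of executable Petri nets."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=lambda: True):
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs) and guard()

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs, _ = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Toy controller: a robot moves from 'searching' to 'connected' when a
# (simulated) wireless-link sensor reads true.
net = PetriNet({'searching': 1, 'connected': 0})
net.add_transition('link_up', ['searching'], ['connected'],
                   guard=lambda: True)   # stands in for a sensing action
net.fire('link_up')
```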
Synthetic Ion Channels and DNA Logic Gates as Components of Molecular Robots.
Kawano, Ryuji
2018-02-19
A molecular robot is a next-generation biochemical machine that imitates the actions of microorganisms. It is made of biomaterials such as DNA, proteins, and lipids. Three prerequisites have been proposed for the construction of such a robot: sensors, intelligence, and actuators. This Minireview focuses on recent research on synthetic ion channels and DNA computing technologies, which are viewed as potential candidate components of molecular robots. Synthetic ion channels, which are embedded in artificial cell membranes (lipid bilayers), sense ambient ions or chemicals and import them. These artificial sensors are useful components for molecular robots with bodies consisting of a lipid bilayer because they enable the interface between the inside and outside of the molecular robot to function as gates. After the signal molecules arrive inside the molecular robot, they can operate DNA logic gates, which perform computations. These functions will be integrated into the intelligence and sensor sections of molecular robots. Soon, these molecular machines will be able to be assembled to operate as a mass microrobot and play an active role in environmental monitoring and in vivo diagnosis or therapy.
NASA Astrophysics Data System (ADS)
Amijoyo Mochtar, Andi
2018-02-01
Applications of robotics have become important to human life in recent years. Many robot specifications have been improved and enriched by advances in technology. Among them are humanoid robots with facial expressions that come closer to natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile-strength testing of silicone rubber as an artificial skin. Facial expressions were calculated by determining the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's facial expression matches the human facial expression when the muscle structure of the face follows human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted to follow human expressions. Tensile testing is conducted to check the proportional force of the artificial skin that can be applied in future robot facial expressions. Combining the calculated and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.
Kimmig, Rainer; Aktas, Bahriye; Buderath, Paul; Wimberger, Pauline; Iannaccone, Antonella; Heubner, Martin
2013-08-16
The technique of compartment-based radical hysterectomy was originally described by M Höckel as total mesometrial resection (TMMR) for standard treatment of stage I and II cervical cancer. However, with regard to the ontogenetically-defined compartments of tumor development (Müllerian) and lymph drainage (Müllerian and mesonephric), compartments at risk may also be defined consistently in endometrial cancer. This is the first report in the literature on the compartment-based surgical approach to endometrial cancer. Peritoneal mesometrial resection (PMMR) with therapeutic lymphadenectomy (tLNE) as an ontogenetic, compartment-based oncologic surgery could be beneficial for patients in terms of surgical radicalness as well as complication rates; it can be standardized for compartment-confined tumors. Supported by M Höckel, PMMR was translated to robotic surgery (rPMMR) and described step-by-step in comparison to robotic TMMR (rTMMR). Patients (n = 42) were treated by rPMMR (n = 39) or extrafascial simple hysterectomy (n = 3) with/without bilateral pelvic and/or periaortic robotic therapeutic lymphadenectomy (rtLNE) for stage I to III endometrial cancer, according to International Federation of Gynecology and Obstetrics (FIGO) classification. Tumors were classified as intermediate/high-risk in 22 out of 40 patients (55%) and low-risk in 18 out of 40 patients (45%), and two patients showed other uterine malignancies. In 11 patients, no adjuvant external radiotherapy was performed, but chemotherapy was applied. No transition to open surgery was necessary. There were no intraoperative complications. The postoperative complication rate was 12% with venous thromboses, (n = 2), infected pelvic lymph cyst (n = 1), transient aphasia (n = 1) and transient dysfunction of micturition (n = 1). The mean difference in perioperative hemoglobin concentrations was 2.4 g/dL (± 1.2 g/dL) and one patient (2.4%) required transfusion. 
During follow-up (median 17 months), one patient experienced distant recurrence and one patient distant/regional recurrence of endometrial cancer (4.8%), but none developed isolated locoregional recurrence. There were two deaths from endometrial cancer during the observation period (4.8%). We conclude that rPMMR and rtLNE are feasible and safe with regard to perioperative morbidity, thus, it seems promising for the treatment of intermediate/high-risk endometrial cancer in terms of surgical radicalness and complication rates. This could be particularly beneficial for morbidly obese and seriously ill patients.
Improving Cognitive Skills of the Industrial Robot
NASA Astrophysics Data System (ADS)
Bezák, Pavol
2015-08-01
At present, there are plenty of industrial robots that are programmed to do the same repetitive task all the time. Industrial robots doing such kinds of jobs are not able to understand whether the action is correct, effective or good. Object detection, manipulation and grasping are challenging due to hand and object modeling uncertainties, unknown contact types and object stiffness properties. In this paper, the proposal of an intelligent humanoid hand object detection and grasping model is presented, assuming that the object properties are known. The control is simulated in Matlab Simulink/SimMechanics, the Neural Network Toolbox and the Computer Vision System Toolbox.
Lewis, Matthew; Cañamero, Lola
2016-10-01
We present a robot architecture and experiments to investigate some of the roles that pleasure plays in the decision making (action selection) process of an autonomous robot that must survive in its environment. We have conducted three sets of experiments to assess the effect of different types of pleasure (related versus unrelated to the satisfaction of physiological needs) under different environmental circumstances. Our results indicate that pleasure, including pleasure unrelated to need satisfaction, has value for homeostatic management in terms of improved viability and increased flexibility in adaptive behavior.
NASA Astrophysics Data System (ADS)
Pini, Giovanni; Tuci, Elio
2008-06-01
In biology/psychology, the capability of natural organisms to learn from the observation/interaction with conspecifics is referred to as social learning. Roboticists have recently developed an interest in social learning, since it might represent an effective strategy to enhance the adaptivity of a team of autonomous robots. In this study, we show that a methodological approach based on artificial neural networks shaped by evolutionary computation techniques can be successfully employed to synthesise the individual and social learning mechanisms for robots required to learn a desired action (i.e. phototaxis or antiphototaxis).
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Tegotae-based decentralised control scheme for autonomous gait transition of snake-like robots.
Kano, Takeshi; Yoshizawa, Ryo; Ishiguro, Akio
2017-08-04
Snakes change their locomotion patterns in response to the environment. This ability is a motivation for developing snake-like robots with highly adaptive functionality. In this study, a decentralised control scheme for snake-like robots that exhibits autonomous gait transition (i.e. the transition between concertina locomotion in narrow aisles and scaffold-based locomotion on unstructured terrains) was developed and validated via simulations. A key insight is that these locomotion patterns were not preprogrammed but emerged by exploiting Tegotae, a concept that describes the extent to which a perceived reaction matches a generated action. Unlike local reflexive mechanisms proposed previously, the Tegotae-based feedback mechanism enabled the robot to 'selectively' exploit environments beneficial for propulsion, and generated reasonable locomotion patterns. It is expected that the results of this study can form the basis to design robots that can work under unpredictable and unstructured environments.
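Tegotae, the extent to which a perceived reaction matches the generated action, is often formalized as the product of an intended action signal and the sensed reaction, fed back into a phase oscillator driving each body segment. The sketch below follows that general pattern with illustrative gains; it is a simplified stand-in, not the paper's exact control law.

```python
import math

def tegotae(intended_force, sensed_reaction):
    """Tegotae quantifies how well a perceived reaction matches the
    generated action; here, simply their product (a common formulation)."""
    return intended_force * sensed_reaction

def oscillator_step(phase, omega, sensed_reaction, sigma=1.0, dt=0.01):
    """Phase oscillator modulated by Tegotae feedback: when the environment
    pushes back consistently with the intended motion, the phase update is
    adjusted so the segment keeps exploiting that support. Gains sigma and
    step dt are illustrative assumptions."""
    intended = math.sin(phase)
    dphase = omega - sigma * tegotae(intended, sensed_reaction) * math.cos(phase)
    return phase + dphase * dt
```

With no environmental reaction the oscillator advances at its intrinsic frequency, matching the idea that the locomotion pattern is not preprogrammed but shaped by how the body's actions meet the environment.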
Coordinated Dynamic Behaviors for Multirobot Systems With Collision Avoidance.
Sabattini, Lorenzo; Secchi, Cristian; Fantuzzi, Cesare
2017-12-01
In this paper, we propose a novel methodology for achieving complex dynamic behaviors in multirobot systems. In particular, we consider a multirobot system partitioned into two subgroups: 1) dependent and 2) independent robots. Independent robots are utilized as a control input, and their motion is controlled in such a way that the dependent robots solve a tracking problem, that is, following arbitrarily defined setpoint trajectories, in a coordinated manner. The control strategy proposed in this paper explicitly addresses the collision avoidance problem, utilizing a null-space-based behavioral approach: this leads to combining, in a non-conflicting manner, the tracking control law with a collision avoidance strategy. The combination of these control actions allows the robots to execute their task in a safe way. Avoidance of collisions is formally proven in this paper, and the proposed methodology is validated by means of simulations and experiments on real robots.
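Null-space-based behavioral composition projects the secondary task velocity into the null space of the primary task's Jacobian, so the secondary task (e.g. tracking) can never disturb the primary one (e.g. collision avoidance). A minimal sketch, with an illustrative one-row Jacobian that constrains motion along x only:

```python
import numpy as np

def nsb_combine(J1, v1, v2):
    """Null-space-based behavioral composition: execute the secondary task
    velocity v2 only in the null space of the primary task Jacobian J1, so
    the two control actions combine in a non-conflicting manner."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector
    return J1_pinv @ v1 + N1 @ v2

# Toy example: the primary task fixes the x-velocity at 0.5; the
# secondary task's x-component (0.2) is filtered out, its y-component kept.
J1 = np.array([[1.0, 0.0]])
v = nsb_combine(J1, v1=np.array([0.5]), v2=np.array([0.2, 0.3]))
```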
1999-03-06
Four robots vie for position on the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Student teams, shown behind protective walls, play defense by taking away competitors' pillows and generally harassing opposing machines. Two of the robots have lifted their caches of pillows above the field, a movement which earns them points. Along with the volunteer referees at the edge of the playing field, judges at right watch the action. FIRST (For Inspiration and Recognition of Science and Technology) is a nonprofit organization. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Faucounau, Véronique; Wu, Ya-Huei; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie
2009-01-01
Older people are an important and growing sector of the population. This demographic change raises the profile of frailty and disability within the world's population. In such conditions, many older people need assistance to perform daily activities. Most of the support is given by family members, who are now a new target in the therapeutic approach. With advances in technology, robotics becomes increasingly important as a means of supporting older people at home. In order to ensure appropriate technology, 30 caregivers filled out a self-administered questionnaire including questions on the needs involved in supporting their proxy and requirements concerning the robotic agent's functions and modes of action. This paper points out the functions to be integrated into the robot in order to support caregivers in the care of their proxy. The results also show that caregivers have a positive attitude towards robotic agents.
Army Logistician. Volume 38, Issue 6, November-December 2006
2006-12-01
functioning electrically, magnetically, or thermally; or performing self-diagnosis and self-healing actions)—will offer extraordinary capabilities for...receives sufficient information about a remote, real-world site (a battlefield) through a machine (a robot) so that the user feels physically present at...collaborative planning. • Improved training and education because of advances in virtual reality environments and perception capabilities. Robots have been
ERIC Educational Resources Information Center
Kamewari, K.; Kato, M.; Kanda, T.; Ishiguro, H.; Hiraki, K.
2005-01-01
Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation…
Complete low-cost implementation of a teleoperated control system for a humanoid robot.
Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena
2013-01-24
Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium-capacity processor and a low computational cost is achieved for executing the different algorithms. Both hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
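The accelerometer filtering step can be illustrated with a scalar Kalman filter tracking a slowly varying tilt signal. The process and measurement noise variances below are assumed values for this sketch, not those used on the robot.

```python
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-state model: a sketch of the
    kind of smoothing applied to a noisy back-mounted accelerometer.
    q is the process noise variance, r the measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict (state assumed nearly constant)
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Feeding a steady reading into the filter shows the estimate converging from the prior toward the true value while averaging out measurement noise.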
Liu, Yali; Ji, Linhong
2018-02-01
Robot rehabilitation has become a primary therapy method for meeting the urgent rehabilitation demands of paralyzed patients after a stroke. The parameters in rehabilitation training, such as the range of the training, which should be adjustable according to each participant's functional ability, are the key factors influencing the effectiveness of rehabilitation therapy. Therapists design rehabilitation projects based on semiquantitative functional assessment scales and their experience, but therapies based on therapists' experience cannot be implemented directly in robot rehabilitation therapy. This paper models the global human-robot system in Simulink in order to analyze the relationship between the parameters in robot rehabilitation therapy and the patients' movement functional abilities. We compared the shoulder and elbow angles calculated by simulation with the angles recorded by a motion capture system while healthy subjects completed the simulated action. Results showed a remarkable correlation between the simulation data and the experiment data, which verified the validity of the global human-robot Simulink model. Besides, the relationship between the circle radius in the drawing tasks in robot rehabilitation training and the active movement degrees of the shoulder as well as the elbow was also fitted by a linear function, with a high fitting coefficient. The fitted linear function can be a quantitative reference for the robot rehabilitation training parameters.
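The linear matching mentioned above amounts to an ordinary least-squares fit between the training-circle radius and the active movement degrees. A minimal sketch of such a fit:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b: the kind of linear mapping
    the paper reports between training-circle radius and joint range."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx
```

Applied to measured (radius, joint-range) pairs, the slope and intercept give the quantitative reference for setting the training range from a patient's assessed movement ability.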
Planning perception and action for cognitive mobile manipulators
NASA Astrophysics Data System (ADS)
Gaschler, Andre; Nogina, Svetlana; Petrick, Ronald P. A.; Knoll, Alois
2013-12-01
We present a general approach to perception and manipulation planning for cognitive mobile manipulators. Rather than hard-coding single-purpose robot applications, a robot should be able to reason about its basic skills in order to solve complex problems autonomously. Humans intuitively solve tasks in real-world scenarios by breaking down abstract problems into smaller sub-tasks and use heuristics based on their previous experience. We apply a similar idea for planning perception and manipulation to cognitive mobile robots. Our approach is based on contingent planning and run-time sensing, integrated in our "knowledge of volumes" planning framework, called KVP. Using the general-purpose PKS planner, we model information-gathering actions at plan time that have multiple possible outcomes at run time. As a result, perception and sensing arise as necessary preconditions for manipulation, rather than being hard-coded as tasks themselves. We demonstrate the effectiveness of our approach on two scenarios covering visual and force sensing on a real mobile manipulator.
Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris
2012-01-01
Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of the extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon.
Troyer, Melissa; Curley, Lauren B.; Miller, Luke E.; Saygin, Ayse P.; Bergen, Benjamin K.
2014-01-01
Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control and read sentences containing literal or metaphoric uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between the prime type (walker or scrambled video) and the verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphoric language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language but be represented in a less-specific way. 
PMID:25538604
Cues that Trigger Social Transmission of Disinhibition in Young Children
ERIC Educational Resources Information Center
Moriguchi, Yusuke; Minato, Takashi; Ishiguro, Hiroshi; Shinohara, Ikuko; Itakura, Shoji
2010-01-01
Previous studies have shown that observing a human model's actions, but not a robot's actions, could induce young children's perseverative behaviors and suggested that children's sociocognitive abilities can lead to perseverative errors ("social transmission of disinhibition"). This study investigated how the social transmission of disinhibition…
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis, including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis and muscular dystrophy, as well as in amputees. Despite this benefit, eye tracking is not widely used as a control interface for robotic devices in movement-impaired patients, largely because of poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace simply by looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This yields a fully automated calibration procedure with several thousand calibration points, versus the dozen or so points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
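A rough sketch of the calibration idea — regress gaze features onto known 3D target positions sampled densely along a continuous trajectory — might look as follows. This is a minimal illustration, not the GT3D pipeline: the trajectory, the simulated gaze model, and the noise level are all assumptions, and ordinary least squares stands in for whatever estimator the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense calibration trajectory: thousands of 3D target points along a
# continuous curve (a stand-in for the space-filling Peano curve).
t = np.linspace(0.0, 1.0, 4000)
targets = np.column_stack([
    np.sin(2 * np.pi * 3 * t),   # x sweep
    np.cos(2 * np.pi * 5 * t),   # y sweep
    2.0 * t - 1.0,               # slow z sweep
])

# Simulated binocular gaze features: an unknown affine function of the
# target position plus sensor noise (a hypothetical eye-tracker model).
A_true = rng.normal(size=(3, 3))
b_true = rng.normal(size=3)
features = targets @ A_true.T + b_true
features += rng.normal(scale=0.05, size=features.shape)

# Calibration = least-squares fit of an affine map features -> targets.
X = np.column_stack([features, np.ones(len(features))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

pred = X @ W
rmse = np.sqrt(np.mean((pred - targets) ** 2, axis=0))
print(np.round(rmse, 3))  # per-axis error of the calibrated estimate
```

With thousands of samples the fit averages out the sensor noise, which is the point of the dense space-filling sweep over a handful of fixation targets.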
Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action
Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra
2014-01-01
Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks, which require the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by this essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents' tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. To evaluate the concept, an experiment was designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
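The central mechanism — continuous phase variables coupled so that the partners' task phases lock — can be illustrated with two Kuramoto-style phase oscillators. The frequencies and coupling gain below are invented, and this omits the event-based anchoring the paper adds on top of the continuous phases.

```python
import numpy as np

# Two phase oscillators (robot and human "task phases") with slightly
# different natural frequencies, pulled together Kuramoto-style.
omega = np.array([2.0, 2.3])   # rad/s, natural frequencies (assumed)
K = 1.0                        # coupling gain (assumed)
dt = 0.01
theta = np.array([0.0, 2.0])   # arbitrary initial phases

for _ in range(5000):          # 50 s of simulated interaction
    coupling = K * np.sin(theta[::-1] - theta)  # each pulled toward the other
    theta = theta + dt * (omega + coupling)

# Wrap the final phase difference into (-pi, pi].
diff = (theta[1] - theta[0] + np.pi) % (2 * np.pi) - np.pi
print(round(float(diff), 3))
```

Because the frequency difference (0.3 rad/s) is smaller than 2K, the pair phase-locks: the phase difference settles to arcsin(0.3 / 2K) ≈ 0.15 rad instead of drifting.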
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov decision processes (LMDPs) are a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments on a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in the camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic-cost task, the LMDP controller derived from a learned linear dynamics model performed comparably to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
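The property the abstract leans on — that substituting the desirability z = exp(−v) turns the Bellman equation into a linear fixed-point equation — can be demonstrated on a toy discrete first-exit problem. The chain, costs and passive dynamics below are invented for illustration:

```python
import numpy as np

# First-exit LMDP on a 5-state chain; state 4 is the absorbing goal.
# Passive dynamics: unbiased random walk with a reflecting left boundary.
n = 5
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[n - 1, n - 1] = 1.0            # goal is absorbing

q = np.ones(n)
q[n - 1] = 0.0                   # unit state cost everywhere, zero at goal

# Desirability z = exp(-v): the Bellman equation becomes the LINEAR
# fixed point z = exp(-q) * (P @ z), solvable by simple iteration.
z = np.ones(n)
for _ in range(1000):
    z = np.exp(-q) * (P @ z)
    z[n - 1] = 1.0               # boundary condition at the goal

v = -np.log(z)                   # optimal value function
print(np.round(v, 2))            # cost-to-go grows away from the goal
```

No maximization over actions appears anywhere: the exponential transform absorbs the optimal control into the linear operator, which is why learned (even crudely linear) dynamics plug in so directly.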
Observation-based training for neuroprosthetic control of grasping by amputees.
Agashe, Harshavardhan A; Contreras-Vidal, Jose L
2014-01-01
Current brain-machine interfaces (BMIs) allow upper limb amputees to position robotic arms with a high degree of accuracy, but lack the ability to control hand pre-shaping for grasping different objects. We have previously shown that low-frequency (0.1-1 Hz) time-domain cortical activity recorded at the scalp via electroencephalography (EEG) encodes information about grasp pre-shaping. To transfer this technology to clinical populations such as amputees, the challenge lies in constructing BMI models in the absence of overt training hand movements. Here we show that it is possible to train BMI models using observed grasping movements performed by a robotic hand attached to the amputee's residual limb. Three transradial amputees controlled the grasping motion of an attached robotic hand via their EEG, following the action-observation training phase. Over multiple sessions, subjects successfully grasped the presented object (a bottle or a credit card) in 53±16% of trials, demonstrating the validity of the BMI models. Importantly, the BMI model was validated through closed-loop performance, which demonstrates generalization of the model to unseen data. These results suggest 'mirror neuron system' properties captured by delta-band EEG that allow the neural representation of action observation to be used for action control in an EEG-based BMI system.
Designing collective behavior in a termite-inspired robot construction team.
Werfel, Justin; Petersen, Kirstin; Nagpal, Radhika
2014-02-14
Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general even less well understood. We present a multi-agent construction system inspired by mound-building termites that solves such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.
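The flavor of such low-level rules can be sketched in one dimension: a robot with only local sensing adds bricks wherever a simple "no cliff" constraint holds, and the target structure nevertheless emerges. The rule and target below are illustrative inventions, not the paper's compiled rule set:

```python
# Toy 1D analogue of stigmergic construction: a single robot sweeps the
# site and may add a brick only where height steps between neighbouring
# sites stay at most one (a local "no cliff" rule).
target = [1, 2, 3, 2, 1]
heights = [0] * len(target)

def can_place(h, i):
    """Local check only: the site itself and its immediate neighbours."""
    new = h[i] + 1
    left_ok = i == 0 or abs(new - h[i - 1]) <= 1
    right_ok = i == len(h) - 1 or abs(new - h[i + 1]) <= 1
    return left_ok and right_ok

passes = 0
while heights != target and passes < 100:
    passes += 1
    for i in range(len(target)):
        if heights[i] < target[i] and can_place(heights, i):
            heights[i] += 1      # deposit one brick per sweep
            break

print(heights, passes)           # target reached under purely local rules
```

The robot never consults a global plan during building; the guarantee comes from the offline compilation of the target into rules, which is the inverse-problem step the paper automates.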
Some foundational aspects of quantum computers and quantum robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.; Physics
1998-01-01
This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.
1999-03-06
Student teams behind protective walls operate remote controls to maneuver their robots around the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. The robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Teams played defense by taking away competitors' pillows and generally harassing opposing machines. On the side of the field are the judges, including (far left) Deputy Director for Launch and Payload Processing Loren Shriver and former KSC Director of Shuttle Processing Robert Sieck. A giant screen TV displays the action on the field. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Sloman, Aaron
2013-06-01
The approach Clark labels "action-oriented predictive processing" treats all cognition as part of a system of on-line control. This ignores other important aspects of animal, human, and robot intelligence. He contrasts it with an alleged "mainstream" approach that also ignores the depth and variety of AI/Robotic research. I don't think the theory presented is worth taking seriously as a complete model, even if there is much that it explains.
2011-11-16
CAPE CANAVERAL, Fla. -- NASA Kennedy Space Center Deputy Director Janet Petro addresses pre-calculus, engineering, and physics students at Timber Creek High School in Orlando, Fla., on the future of the center during an education outreach event on Nov. 16 in the school’s Performing Arts Center. Students also had the opportunity to view a FIRST Robotics robot in action and learn about Kennedy’s Educate to Innovate (KETI) LEGO Mindstorm activities. Photo credit: NASA/Gianni Woods
Biosleeve Human-Machine Interface
NASA Technical Reports Server (NTRS)
Assad, Christopher (Inventor)
2016-01-01
Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.
Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko
2016-08-01
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full-body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to natural language so that robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with natural language. The framework consists of two models: the first statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integrating the two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in an appropriate order using the second, producing the sentences most likely to describe the motion. The proposed framework was tested on human full-body motion measured by an optical motion capture system; descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated. Copyright © 2016 Elsevier Ltd. All rights reserved.
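A toy version of the two-model pipeline — word relevance given the motion, then bigram-based ordering — could look like this; all probabilities and vocabulary below are invented:

```python
from itertools import permutations
from math import prod

# Model 1 (hypothetical): relevance of each word to an observed motion.
relevance = {"person": 0.9, "waves": 0.8, "hand": 0.7}

# Model 2 (hypothetical): bigram probabilities for word ordering,
# with <s> and </s> marking sentence boundaries.
bigram = {
    ("<s>", "person"): 0.8, ("person", "waves"): 0.7,
    ("waves", "hand"): 0.6, ("hand", "</s>"): 0.9,
}

def sentence_score(words):
    """Joint score: bigram fluency times per-word motion relevance."""
    seq = ["<s>", *words, "</s>"]
    fluency = prod(bigram.get(pair, 1e-6) for pair in zip(seq, seq[1:]))
    return fluency * prod(relevance[w] for w in words)

# Search over orderings of the motion-relevant words (brute force here;
# a real system would use beam search over a larger vocabulary).
best = max(permutations(relevance), key=sentence_score)
print(" ".join(best))  # -> "person waves hand"
```

The relevance model selects *which* words to say; the n-gram model decides *in what order* — exactly the division of labor the abstract describes.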
An orbital emulator for pursuit-evasion game theoretic sensor management
NASA Astrophysics Data System (ADS)
Shen, Dan; Wang, Tao; Wang, Gang; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh
2017-05-01
This paper develops and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can reproduce 3D satellite movements using a combination of omni-wheeled robots and robotic arms. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The in-plane movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball moving along a rod (the robotic arm) attached to each robot. For multiple satellites, a fast map-merging algorithm is integrated into the robot operating system (ROS) and simultaneous localization and mapping (SLAM) routines to locate the multiple robots in the scene. The OE is used to demonstrate a pursuit-evasion (PE) game theoretic sensor management algorithm, which models the conflict between a space-based-visible (SBV) satellite (as pursuer) and a geosynchronous (GEO) satellite (as evader). The cost function of the PE game is based on the informational entropy of the SBV-tracking-GEO scenario. The GEO satellite can maneuver using a continuous low-thrust engine. The hardware-in-the-loop space emulator visually illustrates the SSA solution based on the PE game.
Development of coffee maker service robot using speech and face recognition systems using POMDP
NASA Astrophysics Data System (ADS)
Budiharto, Widodo; Meiliana; Santoso Gunawan, Alexander Agung
2016-07-01
There have been many developments of intelligent service robots intended to interact with users naturally. This can be achieved by embedding speech and face recognition abilities for specific tasks into the robot. In this research, we propose an intelligent coffee maker robot whose speech recognition is based on the Indonesian language and powered by statistical dialogue systems. This kind of robot can be used in an office, supermarket or restaurant. In our scenario, the robot recognizes the user's face and then accepts commands from the user to perform an action, specifically making a coffee. Based on our previous work, the accuracy of speech recognition is about 86% and of face recognition about 93% in laboratory experiments. The main problem here is determining the user's intention regarding how sweet the coffee should be. The intelligent coffee maker robot has to infer the user's intention through conversation despite unreliable automatic speech recognition in a noisy environment. In this paper, this spoken dialogue problem is treated as a partially observable Markov decision process (POMDP). We describe how this formulation establishes a promising framework, supported by empirical results. Dialogue simulations are presented which demonstrate significant quantitative outcomes.
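The core of the POMDP treatment is maintaining a belief over the user's hidden intention and updating it from noisy speech observations. A minimal sketch of that belief update for the sweetness slot, with an assumed ASR confusion rate, follows (a full POMDP would additionally optimize when to ask again versus act):

```python
# Belief tracking for the "how sweet?" slot: the robot keeps a
# distribution over the user's intended sweetness and updates it from
# noisy speech-recognition results (confusion model is assumed).
levels = ["low", "medium", "high"]
belief = {s: 1 / 3 for s in levels}       # uniform prior

P_CORRECT = 0.7                            # ASR hears the true word 70% of the time

def update(belief, heard):
    """Bayes rule: posterior ∝ likelihood of the ASR result × prior."""
    post = {}
    for s in levels:
        like = P_CORRECT if s == heard else (1 - P_CORRECT) / 2
        post[s] = like * belief[s]
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# Three noisy dialogue turns; the middle one is an ASR error.
for heard in ["high", "medium", "high"]:
    belief = update(belief, heard)

best = max(belief, key=belief.get)
print(best, round(belief[best], 2))        # -> high 0.79
```

The robot acts (makes the coffee) only once the belief is concentrated enough; otherwise the POMDP policy chooses a clarifying question, which is how the framework survives noisy recognition.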
Robots could assist scientists working in Greenland
NASA Astrophysics Data System (ADS)
Showstack, Randy
2011-07-01
GREENLAND—Tom Lane and Suk Joon Lee, recent graduates of Dartmouth College's Thayer School of Engineering in Hanover, N.H., are standing outside in the frigid cold testing an autonomous robot that could help with scientific research and logistics in harsh polar environments. This summer, Lane, Lee, and others are at Summit Station, a U.S. National Science Foundation (NSF)-sponsored scientific research station in Greenland, fine-tuning a battery-powered Yeti robot as part of a team working on the NSF-funded Cool Robot project. The station, also known as Summit Camp, is located on the highest point of the Greenland Ice Sheet (72°N, 38°W, 3200 meters above sea level) near the middle of the island. It is a proving ground this season for putting the approximately 68-kilogram, 1-cubic-meter robot through its paces, including improving Yeti's mobility capabilities and field-testing the robot. (See the electronic supplement to this Eos issue for a video of Yeti in action (http://www.agu.org/eos_elec/).) During field-testing, plans call for the robot to collect data on elevation and snow surface characteristics, including accumulation. In addition, the robot will collect black carbon and elemental carbon particulate matter air samples around Summit Camp's power generator to help study carbon dispersion over snow.
The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots.
Demiris, Yiannis; Meltzoff, Andrew
2008-01-01
Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes, and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant's ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We will compare the research across the following two dimensions: (a) initial conditions-what is innate in infants, and what functionality is initially given to robots, and (b) developmental mechanisms-how does the performance of infants improve over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with: (a) how their systems can and do go 'beyond the stimulus' given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.
Low-cost educational robotics applied to physics teaching in Brazil
NASA Astrophysics Data System (ADS)
Souza, Marcos A. M.; Duarte, José R. R.
2015-07-01
In this paper, we propose some of the strategies and methodologies for teaching high-school physics topics through an educational robotics show. This exhibition was part of a set of actions promoted by a Brazilian government program of incentive for teaching activities, whose primary focus is the training of teachers, the improvement of teaching in public schools, the dissemination of science, and the formation of new scientists and researchers. By means of workshops, banners and the prototyping of robotics, we were able to create a connection between the study areas and their surroundings, making learning meaningful and accessible for the students involved and contributing to their cognitive development.
Modelling of cooperating robotized systems with the use of object-based approach
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.
2015-11-01
Today's robotized manufacturing systems are characterized by high efficiency, with the emphasis placed mainly on the simultaneous operation of machines. This can manifest itself in many ways; the most spectacular is the cooperation of several robots working on the same workpiece. Moreover, dual-arm robots that mimic the manipulative skills of human hands have recently come into use. As a result, it is often necessary not only to maintain sufficient precision, but also the coordination and proper sequencing of the movements of the individual robots' arms. Successful completion of such a task depends on the individual robot control systems and their respective programs, but also on well-functioning communication between the robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots' kinematic chains. This is not a simple case: because the manufacturers of robotic systems do not disclose the details of their control algorithms, it is hard to detect such situations. Another problem in robot cooperation is how to inform the other units about the start or completion of a part of the task, so that the other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use to collect information about their environment. The approach presented in the paper is not limited to robots and could be used more widely, for example in modelling a complete workcell or production line.
Probabilistic self-localisation on a qualitative map based on occlusions
NASA Astrophysics Data System (ADS)
Santos, Paulo E.; Martins, Murilo F.; Fenelon, Valquiria; Cozman, Fabio G.; Dee, Hannah M.
2016-09-01
Spatial knowledge plays an essential role in human reasoning, permitting tasks such as locating objects in the world (including oneself), reasoning about everyday actions and describing perceptual information. This is also the case in the field of mobile robotics, where one of the most basic (and essential) tasks is the autonomous determination of the pose of a robot with respect to a map, given its perception of the environment. This is the problem of robot self-localisation (or simply the localisation problem). This paper presents a probabilistic algorithm for robot self-localisation that is based on a topological map constructed from the observation of spatial occlusion. Distinct locations on the map are defined by means of a classical formalism for qualitative spatial reasoning, whose base definitions are closer to the human categorisation of space than traditional, numerical, localisation procedures. The approach herein proposed was systematically evaluated through experiments using a mobile robot equipped with a RGB-D sensor. The results obtained show that the localisation algorithm is successful in locating the robot in qualitatively distinct regions.
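The probabilistic machinery described here is essentially a discrete Bayes filter over qualitative regions. A minimal sketch, with an invented loop of four regions each characterized by an occlusion relation and an assumed sensor accuracy:

```python
import numpy as np

# Markov localisation over four qualitative regions arranged in a loop.
# Each region is characterised by the occlusion relation observed there
# (labels are illustrative); the sensor reports the right label 80% of the time.
labels = ["left_occludes", "right_occludes", "no_occlusion", "mutual"]
n = len(labels)

# Motion model: on each step the robot moves to the next region on the
# loop with probability 0.9, or stays put with probability 0.1.
T = np.zeros((n, n))
for r in range(n):
    T[r, r] = 0.1
    T[r, (r + 1) % n] = 0.9

def observe_update(belief, heard):
    """Bayes correction step from one qualitative observation."""
    like = np.where(np.arange(n) == labels.index(heard), 0.8, 0.2 / (n - 1))
    post = like * belief
    return post / post.sum()

belief = np.full(n, 1 / n)           # unknown start: uniform prior
for heard in ["right_occludes", "no_occlusion", "mutual"]:
    belief = belief @ T              # prediction (robot moved)
    belief = observe_update(belief, heard)

print(labels[int(belief.argmax())], round(float(belief.max()), 2))
```

After three move-and-sense cycles consistent with one trajectory around the loop, the belief collapses onto a single qualitative region — localisation without any metric map.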
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1994-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point, to name two major advantages.
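The two-stage translation the patent describes — robot-oriented statements lowered to spatial-operator-algebra statements, then to machine-level code — can be caricatured in a few lines; every operator name below is invented for illustration:

```python
# Toy two-stage translator in the spirit of the patent's language stack.
# Stage 1: robot-oriented statements (MOVE, GRASP) are lowered to
# spatial-operator-algebra-style statements; stage 2 lowers those to
# "machine level" primitives. All operator names here are invented.
def to_algebra(stmt):
    op, arm, _, target = stmt.split(maxsplit=3)
    if op == "MOVE":
        return [f"phi({arm}) := transform({target})",
                f"apply(phi({arm}))"]
    if op == "GRASP":
        return [f"close_chain({arm}, {target})"]
    raise ValueError(f"unknown operator: {op}")

def to_machine(algebra_stmts):
    return [f"EXEC {s}" for s in algebra_stmts]

program = ["MOVE arm1 TO (0.4, 0.2, 0.1)", "GRASP arm1 ON cup"]
machine = [m for stmt in program for m in to_machine(to_algebra(stmt))]
print(len(machine), machine[0])
```

The point of the intermediate algebra layer, as the abstract notes, is that programmers can enter the stack at either level, and a simulation mode can interpret the same lowered statements before they reach the real controller.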
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1996-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.
High level intelligent control of telerobotics systems
NASA Technical Reports Server (NTRS)
Mckee, James
1988-01-01
A high level robot command language is proposed for the autonomous mode of an advanced telerobotics system, along with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperation: the remote operator may not actually see (via a monitor) the results of their actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.
Developing a multidisciplinary robotic surgery quality assessment program.
Gonsenhauser, Iahn; Abaza, Ronney; Mekhjian, Hagop; Moffatt-Bruce, Susan D
2012-01-01
The objective of this study was to test the feasibility of a novel quality-improvement (QI) program designed to incorporate multiple robotic surgical sub-specialties in one health care system. A robotic surgery quality assessment program was developed by The Ohio State University College of Medicine (OSUMC) in conjunction with The Ohio State University Medical Center Quality Improvement and Operations Department. A retrospective review of cases was performed using data interrogated from the OSUMC Information Warehouse from January 2007 through August 2009. Robotic surgery cases (n=2200) were assessed for operative times, length of stay (LOS), conversions, returns to surgery, readmissions and cancellations as potential quality indicators. An actionable and reproducible framework for the quality measurement and assessment of a multidisciplinary and interdepartmental robotic surgery program was successfully developed, identifying opportunities for improvement. This report supports the view that standard quality indicators can be applied to multiple specialties within a health care system to develop a useful quality tracking and assessment tool in the highly specialized area of robotic surgery. © 2012 National Association for Healthcare Quality.
2011-11-16
CAPE CANAVERAL, Fla. -- -- NASA Kennedy Space Center Deputy Director Janet Petro addresses pre-calculus, engineering and physics students at Timber Creek High School, in Orlando, Fla., on work being done at the center during an education outreach event on Nov. 16. Students also had the opportunity to view a FIRST Robotics robot in action, and learn about Kennedy’s Educate to Innovate (KETI) LEGO Mindstorm activities in the school’s Performing Arts Center. Photo credit: NASA/Gianni Woods
2011-11-16
CAPE CANAVERAL, Fla. -- -- Pre-calculus, engineering, and physics students at Timber Creek High School in Orlando, Fla., listen to NASA Kennedy Space Center Deputy Director Janet Petro speak on work being done at the center during an education outreach event on Nov. 16 in the school’s Performing Arts Center. Students also had the opportunity to view a FIRST Robotics robot in action and learn about Kennedy’s Educate to Innovate (KETI) LEGO Mindstorm activities. Photo credit: NASA/Gianni Woods
2010-03-01
and characterize the actions taken by the soldier (e.g., running, walking, climbing stairs). Real-time image capture and exchange... The ability of... multimedia information sharing among soldiers in the field, two-way speech translation systems, and autonomous robotic platforms. Key words: Emerging... It has been the foundation for 10 technology evaluations
Coordinating sensing and local navigation
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1991-01-01
Based on Navigation Templates (or NaTs), this work presents a new paradigm for local navigation which addresses the noisy and uncertain nature of sensor data. Rather than creating a new navigation plan each time the robot's perception of the world changes, the technique incorporates perceptual changes directly into the existing navigation plan. In this way, the robot's navigation plan is quickly and continuously modified, resulting in actions that remain coordinated with its changing perception of the world.
Robotic Movement Elicits Visuomotor Priming in Children with Autism
ERIC Educational Resources Information Center
Pierno, Andrea C.; Mari, Morena; Lusher, Dean; Castiello, Umberto
2008-01-01
The ability to understand another person's action and, if needed, to imitate that action, is a core component of human social behaviour. Imitation skills have attracted particular attention in the search for the underlying causes of the social difficulties that characterize autism. In recent years, it has been reported that people with autism can…
Design of a Micro Cable Tunnel Inspection Robot
NASA Astrophysics Data System (ADS)
Song, Wei; Liu, Lei; Zhou, Xiaolong; Wang, Chengjiang
2016-11-01
Because the ventilation systems in cable tunnels are often inadequate and the environment is closed, toxic and harmful gases accumulate easily, posing a serious threat to the life and safety of inspection staff. Therefore, a micro cable tunnel inspection robot is designed. The overall design comprises two parts: mechanical structure design and control system design. According to the functional requirements of the tunnel inspection robot, a crawler-type wheel-arm structure is proposed. Sensors collect temperature, gas and image data and transmit the information to the host computer in real time. The results show that the robot with the crawler wheel-arm structure has the advantages of small volume, quick action and a high performance-price ratio. It also has strong obstacle-crossing and obstacle-avoidance abilities and can adapt to a variety of complex cable tunnel environments.
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific, mediated responses based on each unit's utility and capability to address the system's perceived needs in a timely manner. This method improves on initial developments in this area by including the often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
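The abstract's two-stage response (a broad, general action refined into specific, utility-mediated commitments) can be sketched in miniature. The data layout and ranking rule here are invented for illustration and are not taken from IDARA:

```python
def respond(stimulus, units, k=2):
    """Two-stage response loosely mirroring the innate/adaptive split:
    every capable unit mounts a broad first response, then only the k
    highest-utility units are committed to the specific task."""
    # Stage 1: broad, all-purpose response by every capable unit.
    capable = [u for u in units if stimulus["type"] in u["skills"]]
    for u in capable:
        u["alerted"] = True
    # Stage 2: specific, mediated response by the best-suited units.
    ranked = sorted(capable, key=lambda u: u["utility"], reverse=True)
    committed = ranked[:k]
    for u in committed:
        u["task"] = stimulus["type"]
    return [u["name"] for u in committed]
```

The general alert is cheap and broad; the expensive, specific commitment is gated by each unit's utility, echoing the first-order/refined split the architecture describes.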
Mobile robot navigation modulated by artificial emotions.
Lee-Johnson, C P; Carnegie, D A
2010-04-01
For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.
Khan, Adeel S; Siddiqui, Imran; Vrochides, Dionisios; Martinie, John B
2018-01-01
Lateral pancreaticojejunostomy (LPJ), also known as the Puestow procedure, is a complex surgical procedure reserved for patients with refractory chronic pancreatitis (CP) and a dilated pancreatic duct. Traditionally, this operation is performed through an open incision; however, recent advancements in minimally invasive techniques have made it possible to perform the surgery using laparoscopic and robotic techniques with comparable safety. Though we do not yet have enough data to prove the superiority of one over the other, the robotic approach appears to have an advantage over the laparoscopic technique in better visualization through 3-dimensional (3D) imaging and the availability of wristed instruments for more precise actions, which may translate into superior outcomes. This paper is a description of our technique for robotic LPJ in patients with refractory CP. Important principles of patient selection, preoperative workup, surgical technique and post-operative management are discussed. A short video with a case presentation and highlights of the important steps of the surgery is included.
Supervisory autonomous local-remote control system design: Near-term and far-term applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul
1993-01-01
The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.
Controllability of Complex Dynamic Objects
NASA Astrophysics Data System (ADS)
Kalach, G. G.; Kazachek, N. A.; Morozov, A. A.
2017-01-01
Quality requirements for mobile robots intended for both specialized and everyday use are increasing in step with the complexity of the technological tasks assigned to the robots. Whether a mobile robot is for ground, aerial, or underwater use, the relevant quality characteristics can be summarized under the common concept of agility. This term denotes the object's (the robot's) ability to react quickly to control actions (change speed and direction), turn in a limited area, etc. When using this approach in an integrated assessment of the quality characteristics of an object together with its control system, it seems more constructive to use the term “degree of control”. This paper assesses the degree of control using the example of a mobile robot with a variable-geometry drive wheel axle. We show changes in the degree of control depending on the robot’s configuration, with results illustrated by calculation data and by computer and practical experiments. We describe the prospects of using intelligent technology for efficient control of objects with a high degree of controllability.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
During the 1999 FIRST Southeastern Regional robotic competition held at KSC, a robot carrying its cache of pillow-like disks maneuvers to move around another at left. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spend two minutes each trying to grab, claw and hoist the pillows onto their machines. Teams play defense by taking away competitors' pillows and generally harassing opposing machines. Behind the field is a group of judges, including former KSC Director of Shuttle Processing Robert Sieck (left, in cap) and Center Director Roy Bridges (in white shirt). A giant-screen TV in the background displays the action on the playing field. FIRST (For Inspiration and Recognition of Science and Technology) is a nonprofit organization. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to give students a hands-on, inside look at engineering and other professional careers.
Reach and grasp by people with tetraplegia using a neurally controlled robotic arm
Hochberg, Leigh R.; Bacher, Daniel; Jarosiewicz, Beata; Masse, Nicolas Y.; Simeral, John D.; Vogel, Joern; Haddadin, Sami; Liu, Jie; Cash, Sydney S.; van der Smagt, Patrick; Donoghue, John P.
2012-01-01
Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS)1–5 could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with longstanding tetraplegia can use an NIS to move and click a computer cursor and to control physical devices6–8. Able-bodied monkeys have used an NIS to control a robotic arm9, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. While robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals. PMID:22596161
NASA Astrophysics Data System (ADS)
Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo
2002-04-01
Robot learning in unstructured environments has been proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilize perceptions to guide their learning on those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. For this reason, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of GAs (genetic algorithms), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified by using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find its application in ocean exploration, detection or sea rescue activity, as well as military maritime activity.
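As a rough illustration of the GA half of the GAFRL idea, a genetic algorithm can search actor parameters globally, with a fitness function standing in for the critic's evaluation. This toy is not the paper's neural-fuzzy agent; the population sizes, operators, and the quadratic stand-in fitness are all assumptions:

```python
import random

def ga_search(fitness, dim, pop_size=30, generations=60, sigma=0.3):
    """Bare-bones GA of the kind GAFRL layers on actor-critic RL:
    candidate actor parameter vectors are scored (here by a stand-in
    for the critic's evaluation), and the best are recombined and
    mutated. The global search sidesteps the local minima that pure
    gradient-based actor-critic updates can get stuck in."""
    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(dim)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, sigma) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical "critic": prefers actor parameters near (0.5, -0.25).
target = (0.5, -0.25)
best = ga_search(lambda w: -sum((wi - ti) ** 2
                                for wi, ti in zip(w, target)), dim=2)
```

In the paper's setting the fitness would come from the critic network's value prediction rather than a closed-form function, which is what lets the GA search more effectively than blind evaluation.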
Abdel Raheem, Ali; Kim, Dae Keun; Santok, Glen Denmer; Alabdulaali, Ibrahim; Chung, Byung Ha; Choi, Young Deuk; Rha, Koon Ho
2016-09-01
To report the 5-year oncological outcomes of robot-assisted radical prostatectomy from the largest series ever reported from Asia. A retrospective analysis of 800 Asian patients who were treated with robot-assisted radical prostatectomy from July 2005 to May 2010 in the Department of Urology and Urological Science Institute, Yonsei University College of Medicine, Seoul, Korea was carried out. The primary end-point was to evaluate the biochemical recurrence. The secondary end-point was to show the biochemical recurrence-free survival, metastasis-free survival and cancer-specific survival. A total of 197 (24.65%), 218 (27.3%), and 385 (48.1%) patients were classified as low-, intermediate- and high-risk patients according to the D'Amico risk stratification risk criteria, respectively. The median follow-up period was 64 months (interquartile range 28-71 months). The overall incidence of positive surgical margin was 36.6%. There was biochemical recurrence in 183 patients (22.9%), 38 patients (4.8%) developed distant metastasis and 24 patients (3%) died from prostate cancer. Actuarial biochemical recurrence-free survival, metastasis-free survival, and cancer-specific survival rates at 5 years were 76.4%, 94.6% and 96.7%, respectively. Positive lymph node was associated with lower 5-year biochemical recurrence-free survival (9.1%), cancer-specific survival (75.7%) and metastasis-free survival (61.9%) rates (P < 0.001). On multivariable analysis, among all the predictors, positive lymph node was the strongest predictor of biochemical recurrence, cancer-specific survival and metastasis-free survival (P < 0.001). Herein we report the largest robot-assisted radical prostatectomy series from Asia. Robot-assisted radical prostatectomy is confirmed to be an oncologically safe procedure that is able to provide effective 5-year cancer control, even in patients with high-risk disease. © 2016 The Japanese Urological Association.
Collective search by mobile robots using alpha-beta coordination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, S.Y.; Robinett, R. III
1998-04-01
One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal: to converge on the location of the source phenomenon, confirm its intensity, and remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha-beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor-based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots expend energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
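The role-partitioning rule can be sketched: each robot compares its sensed status against a threshold recomputed from the collective's current statuses. The specific rule below (the top half by status holds position as beta, the rest explore as alpha) is a guess for illustration, not the published algorithm:

```python
def assign_roles(statuses, fraction_beta=0.5):
    """Dynamic alpha/beta partition in the spirit of the paper: robots
    whose sensed status falls in the top fraction act conservatively
    (beta, aggregate and hold); the rest take risks and explore (alpha).
    The threshold is recomputed from the current collective state each
    call, so roles shift as alpha robots find better regions."""
    ranked = sorted(statuses, reverse=True)
    cutoff_index = max(0, int(len(ranked) * fraction_beta) - 1)
    threshold = ranked[cutoff_index]
    return ["beta" if s >= threshold else "alpha" for s in statuses]
```

Because the threshold is derived from the team's own status readings rather than a fixed constant, the exploration/exploitation balance adapts as the search progresses, which is the decentralized property the abstract emphasizes.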
Active Prior Tactile Knowledge Transfer for Learning Tactual Properties of New Objects
Feng, Di
2018-01-01
Reusing the tactile knowledge of some previously-explored objects (prior objects) helps us to easily recognize the tactual properties of new objects. In this paper, we enable a robotic arm equipped with multi-modal artificial skin, like humans, to actively transfer the prior tactile exploratory action experiences when it learns the detailed physical properties of new objects. These experiences, or prior tactile knowledge, are built by the feature observations that the robot perceives from multiple sensory modalities, when it applies the pressing, sliding, and static contact movements on objects with different action parameters. We call our method Active Prior Tactile Knowledge Transfer (APTKT), and systematically evaluated its performance by several experiments. Results show that the robot improved the discrimination accuracy by around 10% when it used only one training sample with the feature observations of prior objects. By further incorporating the predictions from the observation models of prior objects as auxiliary features, our method improved the discrimination accuracy by over 20%. The results also show that the proposed method is robust against transferring irrelevant prior tactile knowledge (negative knowledge transfer). PMID:29466300
Improving Robot Motor Learning with Negatively Valenced Reinforcement Signals
Navarro-Guerrero, Nicolás; Lowe, Robert J.; Wermter, Stefan
2017-01-01
Both nociception and punishment signals have been used in robotics. However, the potential for using these negatively valenced types of reinforcement learning signals for robot learning has not been exploited in detail yet. Nociceptive signals are primarily used as triggers of preprogrammed action sequences. Punishment signals are typically disembodied, i.e., with no or little relation to the agent-intrinsic limitations, and they are often used to impose behavioral constraints. Here, we provide an alternative approach for nociceptive signals as drivers of learning rather than simple triggers of preprogrammed behavior. Explicitly, we use nociception to expand the state space while we use punishment as a negative reinforcement learning signal. We compare the performance—in terms of task error, the amount of perceived nociception, and length of learned action sequences—of different neural networks imbued with punishment-based reinforcement signals for inverse kinematic learning. We contrast the performance of a version of the neural network that receives nociceptive inputs to that without such a process. Furthermore, we provide evidence that nociception can improve learning—making the algorithm more robust against network initializations—as well as behavioral performance by reducing the task error, perceived nociception, and length of learned action sequences. Moreover, we provide evidence that punishment, at least as typically used within reinforcement learning applications, may be detrimental in all relevant metrics. PMID:28420976
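The two mechanisms the abstract distinguishes (nociception expanding the state space, punishment entering as a negative reward) can be sketched with tabular Q-learning on a toy corridor task. The task, rewards, and parameters are invented for illustration; the paper itself uses neural networks on inverse kinematic learning:

```python
import random

def train(episodes=500, n=5, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning sketch of the paper's two mechanisms: a
    nociception bit is folded into the state tuple (state-space
    expansion), while boundary hits also deliver a punishment
    (negative reward). Task: walk from cell 0 to cell n-1."""
    random.seed(1)
    q = {}  # (pos, nociception_bit, action) -> value
    for _ in range(episodes):
        pos, noci = 0, 0
        while pos != n - 1:
            if random.random() < eps:                  # epsilon-greedy
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q.get((pos, noci, act), 0.0))
            nxt = pos + a
            hit = nxt < 0 or nxt >= n                  # "painful" boundary
            nxt = min(max(nxt, 0), n - 1)
            r = 1.0 if nxt == n - 1 else (-1.0 if hit else -0.1)
            nxt_noci = 1 if hit else 0                 # nociceptive signal
            best = max(q.get((nxt, nxt_noci, b), 0.0) for b in (-1, 1))
            old = q.get((pos, noci, a), 0.0)
            q[(pos, noci, a)] = old + alpha * (r + gamma * best - old)
            pos, noci = nxt, nxt_noci
    return q
```

After training, the learned values prefer moving toward the goal from the start cell, and the table contains entries for the nociceptive states, showing the expanded state space is actually visited and learned over.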
The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.
Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian
2017-10-14
Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy, with the aid of debugging software, and set the steering gear identity (ID), baud rate and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensors to achieve multi-sensor fusion and accurately acquire environmental information. Finally, we implement the relevant algorithm and verify the robot's function for a given application. The advantage of this approach is its reliability and flexibility, as users can develop a variety of hardware construction programs and utilize the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.
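The closed-loop control system is not specified in the abstract; a generic PID speed loop of the kind such motor controllers commonly run can stand in as a sketch. The gains and the first-order motor model below are hypothetical:

```python
def pid_step(state, setpoint, measured, kp=0.8, ki=0.3, kd=0.05, dt=0.01):
    """One update of a textbook PID loop such as a closed-loop
    wheel-speed controller runs: the error between commanded and
    measured speed is fed back into the motor command."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# Simulate a trivial first-order motor: speed relaxes toward the command.
state = {"integral": 0.0, "prev_error": 0.0}
speed = 0.0
for _ in range(2000):
    u = pid_step(state, setpoint=1.0, measured=speed)
    speed += (u - speed) * 0.01   # hypothetical motor dynamics
```

The integral term is what removes the steady-state error a proportional-only loop would leave, which is why closed-loop designs like this reach the commanded speed despite load on the motor.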
Brokaw, Elizabeth B; Nichols, Diane; Holley, Rahsaan J; Lum, Peter S
2014-05-01
Individuals with chronic stroke often have long-lasting upper extremity impairments that impede function during activities of daily living. Rehabilitation robotics have shown promise in improving arm function, but current systems do not allow realistic training of activities of daily living. We have incorporated the ARMin III and HandSOME device into a novel robotic therapy modality that provides functional training of reach and grasp tasks. To compare the effects of equal doses of robotic and conventional therapy in individuals with chronic stroke. Subjects were randomized to 12 hours of robotic or conventional therapy and then crossed over to the other therapy type after a 1-month washout period. Twelve moderate to severely impaired individuals with chronic stroke were enrolled, and 10 completed the study. Across the 3-month study period, subjects showed significant improvements in the Fugl-Meyer (P = .013) and Box and Blocks tests (P = .028). The robotic intervention produced significantly greater improvements in the Action Research Arm Test than conventional therapy (P = .033). Gains in the Box and Blocks test from conventional therapy were larger than from robotic therapy in subjects who received conventional therapy after robotic therapy (P = .044). Data suggest that robotic therapy can elicit improvements in arm function that are distinct from conventional therapy and supplements conventional methods to improve outcomes. Results from this pilot study should be confirmed in a larger study.
"Spooky actions at a distance": physics, psi, and distant healing.
Leder, Drew
2005-10-01
Over decades, consciousness research has accumulated evidence of the real and measurable existence of "spooky actions at a distance"--modes of telepathy, telekinesis, clairvoyance, and the like. More recently scientists have begun rigorous study of the effects of distant healing intention and prayer vis-a-vis nonhuman living systems and patients in clinical trials. A barrier to taking such work seriously may be the belief that it is fundamentally incompatible with the scientific world view. This article suggests that it need not be; contemporary physics has generated a series of paradigms that can be used to make sense of, interpret, and explore "psi" and distant healing. Four such models are discussed, two drawn from relativity theory and two from quantum mechanics. First is the energetic transmission model, presuming the effects of conscious intention to be mediated by an as-yet unknown energy signal. Second is the model of path facilitation. As gravity, according to general relativity, "warps" space-time, easing certain pathways of movement, so may acts of consciousness have warping and facilitating effects on the fabric of the surrounding world. Third is the model of nonlocal entanglement drawn from quantum mechanics. Perhaps people, like particles, can become entangled so they behave as one system with instantaneous and unmediated correlations across a distance. Last discussed is a model involving actualization of potentials. The act of measurement in quantum mechanics collapses a probabilistic wave function into a single outcome. Perhaps conscious healing intention can act similarly, helping to actualize one of a series of possibilities; for example, recovery from a potentially lethal tumor. Such physics-based models are not presented as explanatory but rather as suggestive. Disjunctions as well as compatibilities between the phenomena of modern physics and those of psi and distant healing are explored.
The Summer Robotic Autonomy Course
NASA Technical Reports Server (NTRS)
Nourbakhsh, Illah R.
2002-01-01
We offered a first Robotic Autonomy course this summer, located at NASA/Ames' new NASA Research Park, for approximately 30 high school students. In this 7-week course, students worked in ten teams to build and then program advanced autonomous robots capable of visual processing and high-speed wireless communication. The course made use of challenge-based curricula, culminating each week with a Wednesday Challenge Day and a Friday Exhibition and Contest Day. Robotic Autonomy provided a comprehensive grounding in elementary robotics, including basic electronics, electronics evaluation, microprocessor programming, real-time control, and robot mechanics and kinematics. Our course then continued the educational process by introducing higher-level perception, action and autonomy topics, including teleoperation, visual servoing, intelligent scheduling and planning and cooperative problem-solving. We were able to deliver such a comprehensive, high-level education in robotic autonomy for two reasons. First, the content resulted from close collaboration between the CMU Robotics Institute and researchers in the Information Sciences and Technology Directorate and various education program/project managers at NASA/Ames. This collaboration produced not only educational content, but will also be central to the conduct of formative and summative evaluations of the course for further refinement. Second, CMU rapid prototyping skills as well as the PI's low-overhead perception and locomotion research projects enabled design and delivery of affordable robot kits with unprecedented sensory-locomotory capability. Each Trikebot robot was capable of both indoor locomotion and high-speed outdoor motion and was equipped with a high-speed vision system coupled to a low-cost pan/tilt head. As planned, following the completion of Robotic Autonomy, each student took home an autonomous, competent robot.
This robot is the student's to keep, as she explores robotics with an extremely capable tool in the midst of a new community for roboticists. CMU provided undergraduate course credit for this official course, 16-162U, for 13 students, with all other students receiving course credit from National Hispanic University.
Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1997-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Robot-assisted lobectomy for non-small cell lung cancer in china: initial experience and techniques.
Zhao, Xiaojing; Qian, Liqiang; Lin, Hao; Tan, Qiang; Luo, Qingquan
2010-03-01
To summarize our initial experience with robot-assisted thoracoscopic lobectomy. Five patients underwent lobectomy using the da Vinci S HD Surgical System (Intuitive Surgical, Sunnyvale, California). During the operation, we made four ports over the chest wall for positioning the robotic endoscope, the left and right robotic arms, and auxiliary instruments, without retracting ribs. The procedure followed the same sequential anatomy as complete video-assisted thoracoscopic surgery lobectomy, and lymph node dissection followed international standards. All patients successfully underwent complete robot-assisted thoracoscopic lobectomy. No additional incisions or emergent conversions to thoracotomy were required. Frozen section during lobectomy showed non-small cell lung cancer in four patients, who then underwent systematic lymph node dissection; the remaining case was a tuberculoma and did not undergo lymph node dissection. Recurrent air leak occurred in one case, so the chest tube was kept for drainage; one week later the tube was removed owing to improvement. All other patients recovered well postoperatively without obvious complications. Robot-assisted thoracoscopic surgery is feasible, with good operability, a clear visual field, and reliable action; exquisite operative skill is required to ensure a stable and safe operation. Robot-assisted surgery is efficient, and patients recover well postoperatively.
Social Transmission of Experience of Agency: An Experimental Study.
Khalighinejad, Nima; Bahrami, Bahador; Caspar, Emilie A; Haggard, Patrick
2016-01-01
The sense of controlling one's own actions is fundamental to normal human mental function, and also underlies concepts of social responsibility for action. However, it remains unclear how the wider social context of human action influences sense of agency. Using a simple experimental design, we investigated, for the first time, how observing the action of another person or a robot could potentially influence one's own sense of agency. We assessed how observing another's action might change the perceived temporal relationship between one's own voluntary actions and their outcomes, which has been proposed as an implicit measure of sense of agency. Working in pairs, participants chose between two action alternatives, one rewarded more frequently than the other, while watching a rotating clock hand. They judged, in separate blocks, either the time of their own action, or the time of a tone that followed the action. These were compared to baseline judgements of actions alone, or tones alone, to calculate the perceptual shift of action toward outcome and vice versa. Our design focused on how these two dependent variables, which jointly provide an implicit measure of sense of agency, might be influenced by observing another's action. In the observational group, each participant could see the other's actions. Multivariate analysis showed that the perceived time of action and tone shifted progressively toward the actual time of outcome with repeated experience of this social situation. No such progressive change occurred in other groups for whom a barrier hid participants' actions from each other. However, a similar effect was observed in the group that viewed movements of a human-like robotic hand, rather than actions of another person. This finding suggests that observing the actions of others increases the salience of the external outcomes of action and this effect is not unique to observing human agents. 
Social contexts in which we see others controlling external events may play an important role in mentally representing the impact of our own actions on the external world.
Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.
Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo
2017-11-06
For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods to allow these individuals to fully control the assistive arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in a three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and of powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within reasonable time.
Robot Behavior Acquisition: Superposition and Composition of Behaviors Learned through Teleoperation
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2004-01-01
Superposition of a small set of behaviors, learned via teleoperation, can lead to robust completion of a simple articulated reach-and-grasp task. Results support the hypothesis that a set of learned behaviors can be combined to generate new behaviors of a similar type. This supports the hypothesis that a robot can learn to interact purposefully with its environment through a developmental acquisition of sensory-motor coordination. Teleoperation bootstraps the process by enabling the robot to observe its own sensory responses to actions that lead to specific outcomes. A reach-and-grasp task, learned by an articulated robot through a small number of teleoperated trials, can be performed autonomously with success in the face of significant variations in the environment and perturbations of the goal. Superpositioning was performed using the Verbs and Adverbs algorithm that was developed originally for the graphical animation of articulated characters. Work was performed on Robonaut at NASA-JSC.
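The Verbs and Adverbs algorithm interpolates among example motions; a minimal sketch of the underlying idea, blending a small set of time-aligned joint-space trajectories with normalized weights, is given below. This is an illustration of weighted superposition only, not the paper's algorithm, and all data and names are hypothetical.

```python
# Minimal sketch (not the Verbs and Adverbs algorithm itself): blending a
# small set of time-aligned example joint-space trajectories with normalized
# weights, illustrating how superposed learned behaviors yield a new one.
# All trajectories and names here are hypothetical.

def superpose(trajectories, weights):
    """Blend time-aligned trajectories (lists of joint-angle tuples)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    blended = []
    for frame in zip(*trajectories):          # one frame from each trajectory
        joints = zip(*frame)                  # group values per joint
        blended.append(tuple(sum(w * q for w, q in zip(norm, vals))
                             for vals in joints))
    return blended

# Two example reach trajectories (2 joints, 3 time steps) and a 50/50 blend.
a = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)]
b = [(0.0, 0.0), (0.3, 0.6), (0.6, 1.2)]
mid = superpose([a, b], [1.0, 1.0])
```

With equal weights, each blended frame is the per-joint average of the two examples.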
Empowering Older Patients to Engage in Self Care: Designing an Interactive Robotic Device
Tiwari, Priyadarshi; Warren, Jim; Day, Karen
2011-01-01
Objectives: To develop and test an interactive robot-mounted computing device to support medication management as an example of a complex self-care task in older adults. Method: A Grounded Theory (GT), Participatory Design (PD) approach was used within three Action Research (AR) cycles to understand design requirements and test the design configuration addressing the unique task requirements. Results: At the end of the first cycle a conceptual framework was developed. The second cycle informed architecture and interface design. By the end of the third cycle residents successfully interacted with the dialogue system and were generally satisfied with the robot. The results informed further refinement of the prototype. Conclusion: An interactive, touch-screen based, robot-mounted information tool can be developed to support the healthcare needs of older people. Qualitative methods such as the hybrid GT-PD-AR approach may be particularly helpful for innovating and articulating design requirements in challenging situations. PMID:22195203
Adaptive categorization of ART networks in robot behavior learning using game-theoretic formulation.
Fung, Wai-keung; Liu, Yun-hui
2003-12-01
Adaptive Resonance Theory (ART) networks are employed in robot behavior learning. Two difficulties arise in online robot behavior learning: (1) memory requirements grow exponentially with time, and (2) it is difficult for operators to specify learning-task accuracy and control learning attention before learning begins. To remedy these difficulties, this paper introduces an adaptive categorization mechanism in ART networks for categorizing perceptual and action patterns. A game-theoretic formulation of adaptive categorization is proposed, in which the vigilance parameter is adapted to control the size of the categories formed. The proposed vigilance-parameter update rule helps improve categorization performance in terms of category-number stability, and it removes the need to select an initial vigilance parameter prior to pattern categorization, as required in traditional ART networks. Behavior-learning experiments with a physical robot demonstrate the effectiveness of the proposed adaptive categorization mechanism.
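The vigilance parameter mentioned above gates category acceptance in ART. A hedged sketch of the standard binary (ART-1 style) vigilance test follows; the paper's game-theoretic update rule for vigilance is more involved, and all names and patterns here are illustrative.

```python
# Sketch of the vigilance test in a binary ART-1 style network. The match
# score compares an input pattern against a category's weights; the category
# resonates (accepts the input) only if the match clears the vigilance
# threshold. Illustrative only; not the paper's game-theoretic update rule.

def match_score(pattern, weights):
    """|I AND w| / |I| for binary lists: fraction of the input matched."""
    overlap = sum(p & w for p, w in zip(pattern, weights))
    return overlap / sum(pattern)

def resonates(pattern, weights, vigilance):
    """A category accepts the pattern only if the match clears vigilance."""
    return match_score(pattern, weights) >= vigilance

pattern  = [1, 1, 0, 1]
category = [1, 0, 0, 1]
# 2 of the 3 active input bits are matched, so the score is 2/3: this
# category resonates at low vigilance but is rejected at high vigilance.
```

Raising vigilance forces finer-grained (smaller) categories; lowering it coarsens them, which is why adapting vigilance controls category size.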
Martinez, Dani; Teixidó, Mercè; Font, Davinia; Moreno, Javier; Tresanchez, Marcel; Marco, Santiago; Palacín, Jordi
2014-03-27
This paper proposes the use of an autonomous assistant mobile robot in order to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high performance embedded sensors in order to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation and gas concentration. The data collected with the assistant mobile robot is analyzed in order to detect unusual measurements or discrepancies and develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility which have enabled the detection and location of an uncomfortable temperature profile inside an office of the research facility. The ambient intelligent application has been developed by performing some localized ambient measurements that have been analyzed in order to propose some ambient actuations to correct the uncomfortable temperature profile.
NASA Astrophysics Data System (ADS)
Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro
This paper describes how a compact humanoid robot can acquire a giant-swing motion, without any model of the robot, using the Q-learning method. Q-learning is generally considered ill-suited to learning dynamic motions because the Markov property is not necessarily guaranteed during a dynamic task. We address this problem by embedding the angular velocity into the state definition and by averaging the Q-learning updates to reduce dynamic effects, although some non-Markov effects remain in the learning results. The results show that the robot can acquire a giant-swing motion using the Q-learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the results show how this method can avoid the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
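The core update the paper builds on is standard tabular Q-learning, with the state augmented to include angular velocity as the authors describe. A minimal sketch follows; the state encoding, action names, and parameter values are illustrative, not taken from the paper.

```python
# Minimal tabular Q-learning backup of the kind the paper builds on. The
# state bundles a discretized joint angle and angular velocity, reflecting
# the authors' point that velocity must be part of the state for a dynamic
# task. Action names and parameter values are illustrative.
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9
Q = defaultdict(float)                 # Q[(state, action)] -> value, 0 by default

def update(state, action, reward, next_state, actions):
    """One Q-learning backup: move Q toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative backup: state = (angle_bin, velocity_bin).
actions = ("swing_forward", "swing_back")
update((3, 1), "swing_forward", 1.0, (4, 2), actions)
```

The averaging modification the authors describe would smooth these updates over repeated visits to reduce the influence of unmodeled dynamics; it is omitted here.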
1999-03-06
During the 1999 FIRST Southeastern Regional robotic competition held at KSC, a robot carrying its cache of pillow-like disks maneuvers to move around another at left. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spend two minutes each trying to grab, claw and hoist the pillows onto their machines. Teams play defense by taking away competitors' pillows and generally harassing opposing machines. Behind the field are a group of judges, including former KSC Director of Shuttle Processing Robert Sieck (left, in cap), and Center Director Roy Bridges (in white shirt). A giant screen TV in the background displays the action on the playing field. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Object schemas for grounding language in a responsive robot
NASA Astrophysics Data System (ADS)
Hsiao, Kai-Yuh; Tellex, Stefanie; Vosoughi, Soroush; Kubat, Rony; Roy, Deb
2008-12-01
An approach is introduced for physically grounded natural language interpretation by robots that reacts appropriately to unanticipated physical changes in the environment and dynamically assimilates new information pertinent to ongoing tasks. At the core of the approach is a model of object schemas that enables a robot to encode beliefs about physical objects in its environment using collections of coupled processes responsible for sensorimotor interaction. These interaction processes run concurrently in order to ensure responsiveness to the environment, while co-ordinating sensorimotor expectations, action planning and language use. The model has been implemented on a robot that manipulates objects on a tabletop in response to verbal input. The implementation responds to verbal requests such as 'Group the green block and the red apple', while adapting in real time to unexpected physical collisions and taking opportunistic advantage of any new information it may receive through perceptual and linguistic channels.
Brain computer interface for operating a robot
NASA Astrophysics Data System (ADS)
Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed
2013-10-01
A Brain-Computer Interface (BCI) is a hardware/software based system that translates the Electroencephalogram (EEG) signals produced by brain activity to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions instructed to it in real time. We have used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots
NASA Technical Reports Server (NTRS)
Chen, Vincent Wei-Kang
1992-01-01
Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher level tasks. Robots and astronauts can work together efficiently, as a team; but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances to the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation.
The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL, a first-ever experimental model of a multi-manipulator, free-flying space robot that is capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task description commands to the robot, and to monitor robot activities as it then carried out each assignment autonomously.
NASA Astrophysics Data System (ADS)
Leahy, M. B., Jr.; Cassiday, B. K.
1993-02-01
Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise to assist the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying work load conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.
Robotics, Ethics, and Nanotechnology
NASA Astrophysics Data System (ADS)
Ganascia, Jean-Gabriel
It may seem out of character to find a chapter on robotics in a book about nanotechnology, and even more so a chapter on the application of ethics to robots. Indeed, as we shall see, the questions look quite different in these two fields, i.e., in robotics and nanoscience. In short, in the case of robots, we are dealing with artificial beings endowed with higher cognitive faculties, such as language, reasoning, action, and perception, whereas in the case of nano-objects, we are talking about invisible macromolecules which act, move, and duplicate unseen to us. In one case, we find ourselves confronted by a possibly evil double of ourselves, and in the other, a creeping and intangible nebula assails us from all sides. In one case, we are faced with an alter ego which, although unknown, is clearly perceptible, while in the other, an unspeakable ooze, the notorious grey goo, whose properties are both mysterious and sinister, enters and immerses us. This leads to a shift in the ethical problem situation: the notion of responsibility can no longer be worded in the same terms because, despite its otherness, the robot can always be located somewhere, while in the case of nanotechnologies, myriad nanometric objects permeate everywhere, disseminating uncontrollably.
Exploiting map plans as resources for action
NASA Technical Reports Server (NTRS)
Payton, David
1989-01-01
When plans are used as programs for controlling the action of autonomous or teleoperated robots, their abstract representation can easily obscure a great deal of the critical knowledge that originally led to the planned course of action. An autonomous vehicle experiment is highlighted which illustrates how the information barriers created by abstraction can result in undesirable action. It is then shown how the same task can be performed correctly using plans as a resource for action. As a result of this simple change in outlook, problems requiring opportunistic reaction to unexpected changes in the environment can be solved.
Using neural networks and Dyna algorithm for integrated planning, reacting and learning in systems
NASA Technical Reports Server (NTRS)
Lima, Pedro; Beard, Randal
1992-01-01
The traditional AI answer to the decision-making problem for a robot is planning. However, planning is usually CPU-time consuming, depending on the availability and accuracy of a world model. The Dyna system, described in earlier work, uses trial and error to learn a world model which is simultaneously used to plan reactions resulting in optimal action sequences. It is an attempt to integrate planning, reactive, and learning systems. The architecture of Dyna is presented and its blocks are described. There are three main components of the system. The first is the world model used by the robot for internal world representation. The input of the world model is the current state and the action taken in the current state. The output is the corresponding reward and resulting state. The second module in the system is the policy. The policy observes the current state and outputs the action to be executed by the robot. At the beginning of program execution, the policy is stochastic and through learning progressively becomes deterministic. The policy decides upon an action according to the output of an evaluation function, which is the third module of the system. The evaluation function takes the following as input: the current state of the system, the action taken in that state, the resulting state, and a reward generated by the world which is proportional to the current distance from the goal state. Originally, the work proposed was as follows: (1) to implement a simple 2-D world where a 'robot' navigates around obstacles to learn the path to a goal, using lookup tables; (2) to substitute neural networks for the world model and the Q estimate function; and (3) to apply the algorithm to a more complex world where the use of a neural network would be fully justified. In this paper, the system design and achieved results are described. First we implement the world model with a neural network and leave Q implemented as a lookup table.
Next, we use a lookup table for the world model and implement the Q function with a neural net. Time limitations prevented the combination of these two approaches. The final section discusses the results and gives clues for future work.
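The Dyna architecture described above can be sketched compactly as Dyna-Q: each real transition updates the evaluation function directly, is recorded in the world model, and is then replayed in a handful of simulated planning steps. This is a generic illustration of the scheme, not the authors' implementation; states, actions, and parameters are made up.

```python
# Compact Dyna-Q sketch matching the architecture described above: one real
# transition updates Q directly, then a few simulated "planning" updates
# replay remembered transitions from the learned world model.
# Illustrative only; not the authors' code.
import random

ALPHA, GAMMA, PLANNING_STEPS = 0.5, 0.95, 5
Q, model = {}, {}            # model[(s, a)] -> (reward, next_state)

def q(s, a):
    return Q.get((s, a), 0.0)

def dyna_update(s, a, r, s2, actions, rng=random.Random(0)):
    # Direct reinforcement-learning step from real experience.
    Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * max(q(s2, b) for b in actions) - q(s, a))
    model[(s, a)] = (r, s2)  # learn the world model from the same experience
    # Planning: replay random remembered transitions through the same update.
    for _ in range(PLANNING_STEPS):
        (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
        Q[(ps, pa)] = q(ps, pa) + ALPHA * (
            pr + GAMMA * max(q(ps2, b) for b in actions) - q(ps, pa))

# One real experience followed by five planning replays of it.
actions = ("pump", "hold")
dyna_update("s0", "pump", 1.0, "s1", actions)
```

Replaying each real transition through the model is what lets Dyna approach planned (optimal) behavior with far less real-world trial and error.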
Telerobotic control of a mobile coordinated robotic server, executive summary
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
This interim report continues the research effort on advanced adaptive controls for space robotics systems. In particular, previous results developed by the principal investigator and his research team centered around fuzzy logic control (FLC), in which the lack of knowledge of the robotic system as well as the uncertainties of the environment are compensated for by a rule-base structure which interacts with varying degrees of belief of control action using system measurements. An on-line adaptive algorithm was developed using a single-parameter tuning scheme. In the effort presented, the methodology is further developed to include on-line scaling factor tuning and self-learning control, as well as extended to the multi-input, multi-output (MIMO) case. Classical fuzzy logic control requires tuning input scale factors off-line through trial and error techniques. This is time-consuming and cannot adapt to new changes in the process. The new adaptive FLC includes a self-tuning scheme for choosing the scaling factors on-line. Further, the rule base in classical FLC is usually produced by soliciting knowledge from human operators as to what is good control action for given circumstances. This usually requires full knowledge and experience of the process and operating conditions, which limits applicability. A self-learning scheme is developed which adaptively forms the rule base with very limited knowledge of the process. Finally, a MIMO method is presented employing optimization techniques. This is required for application to space robotics, in which several degrees-of-freedom links are commonly used. Simulation examples are presented for terminal control - typical of robotic problems in which a desired terminal point is to be reached for each link. Future activities will be to implement the MIMO adaptive FLC on an INTEL microcontroller-based circuit and to test the algorithm on a robotic system at the Mars Mission Research Center at North Carolina State University.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
Wei, Xi-Jun; Tong, Kai-Yu; Hu, Xiao-Ling
2011-12-01
Responsiveness of clinical assessments is an important element in the report of clinical effectiveness after rehabilitation. The correlation could reflect the validity of assessments as an indication of clinical performance before and after interventions. This study investigated the correlation and responsiveness of the Fugl-Meyer Assessment (FMA), Motor Status Scale (MSS), Action Research Arm Test (ARAT) and the Modified Ashworth Scale (MAS), which are used frequently in effectiveness studies of robotic upper-extremity training in stroke rehabilitation. Twenty-seven chronic stroke patients were recruited for a 20-session upper-extremity rehabilitation robotic training program. This was a rater-blinded randomized controlled trial. All participants were evaluated with FMA, MSS, ARAT, MAS, and the Functional Independence Measure before and after robotic training. Spearman's rank correlation coefficient was applied for the analysis of correlation. The standardized response mean (SRM) and Guyatt's responsiveness index (GRI) were used to analyze responsiveness. Spearman's correlation coefficient showed a significantly high correlation (ρ=0.91-0.96) among FMA, MSS, and ARAT and a fair-to-moderate correlation (ρ=0.40-0.62) between MAS and the other assessments. FMA, MSS, and MAS on the wrist showed higher responsiveness (SRM=0.85-0.98, GRI=1.59-3.62), whereas ARAT showed relatively less responsiveness (SRM=0.22, GRI=0.81). The results showed that FMA or MSS would be the best choice for evaluating the functional improvement in stroke studies on robotic upper-extremity training, with high responsiveness and good correlation with ARAT. MAS could be used separately to evaluate the spasticity changes after intervention, in terms of high responsiveness.
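The standardized response mean cited in the abstract is the mean change score divided by the standard deviation of the change scores; a minimal sketch follows. Guyatt's index instead divides by change variability in a stable reference group, so it is omitted here; the scores below are made up for illustration.

```python
# Sketch of the standardized response mean (SRM) used above:
# SRM = mean(change scores) / standard deviation(change scores).
# Guyatt's responsiveness index needs a stable reference group and is
# not computed here. The pre/post scores below are hypothetical.
from statistics import mean, stdev

def srm(pre, post):
    """Standardized response mean of paired pre/post assessment scores."""
    change = [b - a for a, b in zip(pre, post)]
    return mean(change) / stdev(change)

pre  = [20, 24, 18, 30, 26]
post = [28, 30, 25, 34, 35]
effect = srm(pre, post)   # larger magnitude = more responsive instrument
```

By this convention, values around 0.2, 0.5, and 0.8 are commonly read as small, moderate, and large responsiveness, which is why SRM=0.22 for ARAT counts as low and SRM=0.85-0.98 as high.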
You mob my owl, I'll mob yours: birds play tit-for-tat game
Krama, Tatjana; Vrublevska, Jolanta; Freeberg, Todd M.; Kullberg, Cecilia; Rantala, Markus J.; Krams, Indrikis
2012-01-01
Reciprocity is fundamental to cooperative behaviour and has been verified in theoretical models. However, there is still limited experimental evidence for reciprocity in non-primate species. Our results more decisively clarify that reciprocity with a tit-for-tat enforcement strategy can occur among breeding pied flycatchers Ficedula hypoleuca separate from considerations of byproduct mutualism. Breeding pairs living in close proximity (20–24 m) did exhibit byproduct mutualism and always assisted in mobbing regardless of their neighbours' prior actions. However, breeding pairs with distant neighbours (69–84 m) either assisted or refused to assist in mobbing a predatory owl based on whether or not the distant pair had previously helped them in their own nest defense against the predator. Clearly, these birds are aware of their specific spatial security context, remember their neighbours' prior behaviour, and choose a situation-specific strategic course of action, which could promote their longer-term security, a capacity previously thought unique to primates. PMID:23150772
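The contingent strategy described above is the classic tit-for-tat of iterated game theory: cooperate first, then mirror the partner's last move. A toy rendering (boolean moves; the call pattern is invented for illustration):

```python
def tit_for_tat(neighbour_history):
    """Help on the first encounter; afterwards mirror the distant
    neighbour's last move (True = they helped mob the owl)."""
    return True if not neighbour_history else neighbour_history[-1]

print(tit_for_tat([]))             # first encounter: cooperate
print(tit_for_tat([True, False]))  # neighbour defected last: refuse
```

Close neighbours in the study are effectively outside this logic, since byproduct mutualism makes helping unconditionally worthwhile.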
Integrated Attitude Control Strategy for the Asteroid Redirect Mission
NASA Technical Reports Server (NTRS)
Lopez, Pedro, Jr.; Price, Hoppy; San Martin, Miguel
2014-01-01
A deep-space mission has been proposed to redirect an asteroid to a distant retrograde orbit around the moon using a robotic vehicle, the Asteroid Redirect Vehicle (ARV). In this orbit, astronauts will rendezvous with the ARV using the Orion spacecraft. The integrated attitude control concept that Orion will use for approach and docking and for mated operations will be described. Details of the ARV's attitude control system and its associated constraints for redirecting the asteroid to the distant retrograde orbit around the moon will be provided. Once Orion is docked to the ARV, an overall description of the mated stack attitude during all phases of the mission will be presented using a coordinate system that was developed for this mission. Next, the thermal and power constraints of both the ARV and Orion will be discussed as well as how they are used to define the optimal integrated stack attitude. Lastly, the lighting and communications constraints necessary for the crew's extravehicular activity planned to retrieve samples from the asteroid will be examined. Similarly, the joint attitude control strategy that employs both the Orion and the ARV attitude control assets prior, during, and after each extravehicular activity will also be thoroughly discussed.
Design of a biomimetic robotic octopus arm.
Laschi, C; Mazzolai, B; Mattoli, V; Cianchetti, M; Dario, P
2009-03-01
This paper reports the rationale and design of a robotic arm, as inspired by an octopus arm. The octopus arm shows peculiar features, such as the ability to bend in all directions, to produce fast elongations, and to vary its stiffness. The octopus achieves these unique motor skills, thanks to its peculiar muscular structure, named muscular hydrostat. Different muscles arranged on orthogonal planes generate an antagonistic action on each other in the muscular hydrostat, which does not change its volume during muscle contractions, and allow bending and elongation of the arm and stiffness variation. By drawing inspiration from natural skills of octopus, and by analysing the geometry and mechanics of the muscular structure of its arm, we propose the design of a robot arm consisting of an artificial muscular hydrostat structure, which is completely soft and compliant, but also able to stiffen. In this paper, we discuss the design criteria of the robotic arm and how this design and the special arrangement of its muscular structure may bring the building of a robotic arm into being, by showing the results obtained by mathematical models and prototypical mock-ups.
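The constant-volume property of a muscular hydrostat directly couples elongation to radial contraction. A small worked example, idealizing an arm segment as a cylinder (illustrative numbers, not measurements from the paper):

```python
import math

def radius_after_elongation(r0, l0, l1):
    """For a constant-volume cylinder V = pi * r^2 * L, stretching
    from length l0 to l1 forces the radius to shrink so that
    r1 = r0 * sqrt(l0 / l1)."""
    volume = math.pi * r0 ** 2 * l0
    return math.sqrt(volume / (math.pi * l1))

# Doubling the length contracts the radius by a factor of 1/sqrt(2)
r1 = radius_after_elongation(10.0, 100.0, 200.0)  # dimensions in mm
print(round(r1, 3))
```

This geometric coupling is what lets orthogonally arranged muscles act antagonistically without a skeleton: transverse contraction must produce elongation, and vice versa.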
Virtual Reality Robotic Operation Simulations Using MEMICA Haptic System
NASA Technical Reports Server (NTRS)
Bar-Cohen, Y.; Mavroidis, C.; Bouzit, M.; Dolgin, B.; Harm, D. L.; Kopchok, G. E.; White, R.
2000-01-01
There is an increasing realization that some tasks can be performed significantly better by humans than robots but, due to associated hazards, distance, etc., only a robot can be employed. Telemedicine is one area where remotely controlled robots can have a major impact by providing urgent care at remote sites. In recent years, remotely controlled robotics has been greatly advanced; the robotic astronaut "Robonaut" at NASA Johnson Space Center is one such example. Unfortunately, due to the unavailability of force and tactile feedback, the operator must determine the required action using only visual feedback from the remote site, which limits the tasks that Robonaut can perform. There is a great need for dexterous, fast, accurate teleoperated robots with the operator's ability to "feel" the environment at the robot's site. Recently, we conceived a haptic mechanism called MEMICA (Remote MEchanical MIrroring using Controlled stiffness and Actuators) that can enable the design of high-dexterity, rapid-response, and large-workspace systems. Our team is developing novel MEMICA gloves and virtual reality models to allow the simulation of telesurgery and other applications. The MEMICA gloves are designed to have high dexterity, rapid response, and a large workspace, and to intuitively mirror the conditions at a virtual site where a robot is simulating the presence of the human operator. The key components of MEMICA are miniature Electrically Controlled Stiffness (ECS) elements and Electrically Controlled Force and Stiffness (ECFS) actuators that are based on the use of Electro-Rheological Fluids (ERF). In this paper the design of the MEMICA system and initial experimental results are presented.
Cognitive patterns: giving autonomy some context
NASA Astrophysics Data System (ADS)
Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike
2013-05-01
Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
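Driving a head through FACS typically means mapping a target expression to its action units, and each AU to actuator setpoints. A hypothetical sketch (the joint names and intensities are invented; the AU numbers follow the FACS convention, e.g. AU12 is the lip corner puller):

```python
# Prototypical AU combinations (standard in FACS literature)
EXPRESSION_AUS = {"happiness": [6, 12], "surprise": [1, 2, 5, 26]}

# Invented joint targets per AU, normalized to [0, 1]
AU_TO_JOINTS = {1: {"eyebrow_inner": 0.8}, 2: {"eyebrow_outer": 0.7},
                5: {"eyelid": 1.0}, 6: {"eyelid": 0.4},
                12: {"mouth_corner": 0.9}, 26: {"jaw": 0.6}}

def joint_targets(expression):
    """Resolve an expression label to head-joint setpoints via its AUs."""
    targets = {}
    for au in EXPRESSION_AUS[expression]:
        targets.update(AU_TO_JOINTS[au])
    return targets

print(joint_targets("happiness"))
```

Exposing the AU layer, rather than raw joints, is what lets third-party recognition and synthesis systems drive the head without knowing its kinematics.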
Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1998-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Human-robot skills transfer interfaces for a flexible surgical robot.
Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G
2014-09-01
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
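Stripped of the context-dependent weighting of candidate objectives, the core of reward-weighted self-refinement can be sketched as follows: sample policy parameters around the current estimate, score each rollout, and move to the reward-weighted mean. The toy objective and all parameters below are invented, not the STIFF-FLOP setup:

```python
import math
import random

def reward_weighted_update(theta, rollout, n_samples=20, sigma=0.1, beta=5.0):
    """One iteration of reward-weighted refinement: perturb theta,
    weight each sample by exp(beta * reward), return the weighted mean."""
    samples = [[t + random.gauss(0, sigma) for t in theta]
               for _ in range(n_samples)]
    weights = [math.exp(beta * rollout(s)) for s in samples]
    total = sum(weights)
    return [sum(w * s[i] for w, s in zip(weights, samples)) / total
            for i in range(len(theta))]

# Toy objective: reward peaks at theta = (0.5, -0.2)
random.seed(0)
goal = (0.5, -0.2)
reward = lambda th: -sum((a - b) ** 2 for a, b in zip(th, goal))
theta = [0.0, 0.0]
for _ in range(100):
    theta = reward_weighted_update(theta, reward)
```

In the paper's context-dependent variant, the scalar reward would itself be a learned, phase-dependent combination of candidate objective functions rather than a fixed function as here.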
ODYSSEUS autonomous walking robot: The leg/arm design
NASA Technical Reports Server (NTRS)
Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.
1994-01-01
ODYSSEUS is an autonomous walking robot which makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it uses its wheels to move around in an environment where the surface is smooth and even. However, when there are low obstacles, stairs, or slight unevenness in the navigation environment, the robot makes use of both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movements, selection of objects, etc.). In particular, the leg/arm consists of three major parts: The first part is a pipe attached to the robot base with a flexible 3-D joint. This pipe has a rotating bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is also a pipe similar to the first. The extended bar of the second part ends at a 2-D joint. The last part of the leg/arm is a clip-hand. It is used for picking up various small, lightweight objects, and when it is in a 'closed' mode, it is used as a supporting part of the robot leg. The entire leg/arm part is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.
NASA Astrophysics Data System (ADS)
Kotani, Naoki; Taniguchi, Kenji
An efficient learning method using Fuzzy ART with a Genetic Algorithm is proposed. The proposed method reduces the number of trials by reusing a policy acquired in other tasks, because reinforcement learning requires many trials before an agent acquires appropriate actions. Fuzzy ART is an incremental unsupervised learning algorithm that responds to arbitrary sequences of analog or binary input vectors. Our proposed method generates a policy by crossover or mutation when an agent observes unknown states, and selection controls the category proliferation problem of Fuzzy ART. The effectiveness of the proposed method was verified in a simulation of the reaching problem for a two-link robot arm. The proposed method achieves a reduction in both the number of trials and the number of states.
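For reference, the Fuzzy ART mechanics the method builds on can be sketched compactly: complement-coded inputs, a choice function T_j = |I ^ w_j| / (alpha + |w_j|), a vigilance test, and fast learning. This is a generic minimal sketch (inputs in [0, 1]), not the authors' implementation:

```python
def fuzzy_and(a, b):
    return [min(x, y) for x, y in zip(a, b)]

class FuzzyART:
    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = vigilance, alpha, beta
        self.w = []  # one weight vector per category

    def train(self, x):
        i = list(x) + [1 - v for v in x]  # complement coding
        # Rank existing categories by the choice function
        order = sorted(range(len(self.w)),
                       key=lambda j: -sum(fuzzy_and(i, self.w[j]))
                       / (self.alpha + sum(self.w[j])))
        for j in order:
            match = sum(fuzzy_and(i, self.w[j])) / sum(i)
            if match >= self.rho:  # vigilance passed: resonate and learn
                old = self.w[j]
                self.w[j] = [self.beta * m + (1 - self.beta) * wv
                             for m, wv in zip(fuzzy_and(i, old), old)]
                return j
        self.w.append(i)           # novel state: recruit a new category
        return len(self.w) - 1

net = FuzzyART()
# Similar inputs share a category; a novel input gets its own
print(net.train([0.1, 0.2]), net.train([0.12, 0.18]), net.train([0.9, 0.9]))
```

Unchecked category recruitment on novel states is exactly the proliferation problem the paper's GA selection step is meant to control.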
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation, and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform a 3D-reconstruction-from-monocular-vision technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm is also implemented, in which each detected person is assigned to a set of action classes picked to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms, and their modifications, which when implemented on the RAIDER serve the purpose of indoor surveillance.
Beyond adaptive-critic creative learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Liao, Xiaoqun; Cao, Ming; Hall, Ernest L.
2001-10-01
Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that extending the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies for non-linear systems. The purpose of this paper is to explore new learning methods that go beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning: a critic element provides only high-level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades is available to the cognition module. The forecasted grades are interpolated and are available on the time scale needed by the action module. The success of the system is highly dependent on the accuracy of the forecasted grades and the adaptability of the action module. Examples from the guidance of a mobile robot illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented that goes beyond the adaptive critic may be called creative theory. Creative theory is a form of learning that models the highest level of human learning: imagination. Creative theory appears applicable not only to mobile robots but also to many other forms of human endeavor, such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions. The significance of creative theory is that it permits the discovery of unknown problems, ones that are not yet recognized but may be critical to survival or success.
Integrating Reading and Language Arts.
ERIC Educational Resources Information Center
French, Michael P., Ed.; Elford, Shirley J., Ed.
1986-01-01
Integrating reading and language arts at all levels is the focus of this journal issue. The articles and their authors are as follows: "Reading and Writing: Close Relatives or Distant Cousins" (Kathryn A. Koch); "The Reading-Writing Relationship: Myths and Realities" (Timothy Shanahan); "The Classroom Teacher as an Action Researcher: Beginning…
Counterfactual quantum erasure: spooky action without entanglement
NASA Astrophysics Data System (ADS)
Salih, Hatim
2018-02-01
We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: a distant Bob can decide to erase which-path information from Alice's photon, dramatically restoring interference, without previously shared entanglement and without Alice's photon ever leaving her laboratory.
Human Space Exploration and Human Space Flight: Latency and the Cognitive Scale of the Universe
NASA Technical Reports Server (NTRS)
Lester, Dan; Thronson, Harley
2011-01-01
The role of telerobotics in space exploration, as a means of placing human cognition on other worlds, is limited almost entirely by the speed of light and the consequent communications latency that results from large distances. This latency is the time delay between the human brain at one end and the telerobotic effector and sensor at the other. While telerobotics and virtual presence are technologies that are rapidly becoming more sophisticated, with strong commercial interest on the Earth, this time delay, along with the neurological timescale of a human being, quantitatively defines the cognitive horizon for any locale in space: how distant can an operator be from a robot and not be significantly impacted by latency? We explore that cognitive timescale of the universe and consider the implications for telerobotics, human space flight, and participation by larger numbers of people in space exploration. We conclude that, with advanced telepresence, sophisticated robots could be operated with high cognition throughout a lunar hemisphere by astronauts within a station at an Earth-Moon L1 or L2 venue. Likewise, complex telerobotic servicing of satellites in geosynchronous orbit could be carried out from suitable terrestrial stations.
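The latency argument is easy to quantify: round-trip light time already exceeds the human reaction timescale (roughly 0.2 s) well inside cislunar space. A quick sketch with approximate mean distances (the cislunar figure is a rough assumption for an Earth-Moon L2 station to the lunar surface):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_latency_s(distance_km):
    """Minimum command-to-feedback delay for a teleoperator."""
    return 2 * distance_km / C_KM_S

for place, d_km in [("Earth-Moon L2 to lunar surface (approx.)", 60_000),
                    ("Earth to Moon", 384_400),
                    ("Earth to Mars (close approach)", 54_600_000)]:
    print(f"{place}: {round_trip_latency_s(d_km):.2f} s")
```

The roughly 2.6 s Earth-Moon round trip is already an order of magnitude beyond comfortable real-time control, while an L1/L2 station keeps the delay near the threshold of direct cognition.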
Distant Site Effects of Ingested Prebiotics
Collins, Stephanie; Reid, Gregor
2016-01-01
The gut microbiome is being more widely recognized for its association with positive health outcomes, including those distant to the gastrointestinal system. This has given the ability to maintain and restore microbial homeostasis a new significance. Prebiotic compounds are appealing for this purpose as they are generally food-grade substances only degraded by microbes, such as bifidobacteria and lactobacilli, from which beneficial short-chain fatty acids are produced. Saccharides such as inulin and other fructo-oligosaccharides, galactooligosaccharides, and polydextrose have been widely used to improve gastrointestinal outcomes, but they appear to also influence distant sites. This review examined the effects of prebiotics on bone strength, neural and cognitive processes, immune functioning, skin, and serum lipid profile. The mode of action is in part affected by intestinal permeability and by fermentation products reaching target cells. As the types of prebiotics available diversify, so too will our understanding of the range of microbes able to degrade them, and the extent to which body sites can be impacted by their consumption. PMID:27571098
Social Transmission of Experience of Agency: An Experimental Study
Khalighinejad, Nima; Bahrami, Bahador; Caspar, Emilie A.; Haggard, Patrick
2016-01-01
The sense of controlling one’s own actions is fundamental to normal human mental function, and also underlies concepts of social responsibility for action. However, it remains unclear how the wider social context of human action influences sense of agency. Using a simple experimental design, we investigated, for the first time, how observing the action of another person or a robot could potentially influence one’s own sense of agency. We assessed how observing another’s action might change the perceived temporal relationship between one’s own voluntary actions and their outcomes, which has been proposed as an implicit measure of sense of agency. Working in pairs, participants chose between two action alternatives, one rewarded more frequently than the other, while watching a rotating clock hand. They judged, in separate blocks, either the time of their own action, or the time of a tone that followed the action. These were compared to baseline judgements of actions alone, or tones alone, to calculate the perceptual shift of action toward outcome and vice versa. Our design focused on how these two dependent variables, which jointly provide an implicit measure of sense of agency, might be influenced by observing another’s action. In the observational group, each participant could see the other’s actions. Multivariate analysis showed that the perceived time of action and tone shifted progressively toward the actual time of outcome with repeated experience of this social situation. No such progressive change occurred in other groups for whom a barrier hid participants’ actions from each other. However, a similar effect was observed in the group that viewed movements of a human-like robotic hand, rather than actions of another person. This finding suggests that observing the actions of others increases the salience of the external outcomes of action and this effect is not unique to observing human agents. 
Social contexts in which we see others controlling external events may play an important role in mentally representing the impact of our own actions on the external world. PMID:27625626
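The implicit agency measure described above reduces to simple differences between operant and baseline time judgements: the action is perceived later (shifted toward the tone) and the tone earlier (shifted toward the action). A sketch with invented values in milliseconds, relative to the actual event times:

```python
def binding_shifts(action_op, action_base, tone_op, tone_base):
    """Perceptual shifts used as an implicit measure of agency:
    positive action shift = action judged later in the operant
    block than baseline; negative tone shift = tone judged earlier."""
    return action_op - action_base, tone_op - tone_base

action_shift, tone_shift = binding_shifts(15, -5, -40, 10)
print(action_shift, tone_shift)  # action drawn toward tone, tone toward action
```

The study's multivariate analysis tracks how this pair of shifts changes with repeated exposure to another agent's (or robot hand's) visible actions.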
The Initial Development of Object Knowledge by a Learning Robot
Modayil, Joseph; Kuipers, Benjamin
2008-01-01
We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188
Small Business Innovations (Exoskeletons)
NASA Technical Reports Server (NTRS)
1992-01-01
The Dexterous Hand Master (DHM), a 1989 winner of an R&D 100 Award, is an exoskeleton device for measuring the joints of the human hand with extreme precision. It was originally developed for NASA by Arthur D. Little, and is sold commercially by EXOS, Inc. The DHM is worn on the hand and connected to a computer that records hand motions. The resulting data is transmitted as control signals to robots and other computers, enabling robotic hands to emulate human hand actions. Two additional spinoff products were also inspired by the DHM.
The Human Touch: Practical and Ethical Implications of Putting AI and Robotics to Work for Patients.
Banks, Jim
2018-01-01
We live in a time when science fiction can quickly become science fact. Within a generation, the Internet has matured from a technological marvel to a utility, and mobile telephones have redefined how we communicate. Health care, as an industry, is quick to embrace technology, so it is no surprise that the application of programmable robotic systems that can carry out actions automatically and artificial intelligence (AI), e.g., machines that learn, solve problems, and respond to their environment, is being keenly explored.
Learning models of Human-Robot Interaction from small data
Zehfroosh, Ashkan; Kokkoni, Elena; Tanner, Herbert G.; Heinz, Jeffrey
2018-01-01
This paper offers a new approach to learning discrete models for human-robot interaction (HRI) from small data. In the motivating application, HRI is an integral part of a pediatric rehabilitation paradigm that involves a play-based, social environment aiming at improving mobility for infants with mobility impairments. Designing interfaces in this setting is challenging, because in order to harness, and eventually automate, the social interaction between children and robots, a behavioral model capturing the causality between robot actions and child reactions is needed. The paper adopts a Markov decision process (MDP) as such a model, and selects the transition probabilities through an empirical approximation procedure called smoothing. Smoothing has been successfully applied in natural language processing (NLP) and identification where, similarly to the current paradigm, learning from small data sets is crucial. The goal of this paper is two-fold: (i) to describe our application of HRI, and (ii) to provide evidence that supports the application of smoothing for small data sets. PMID:29492408
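As an illustration of why smoothing matters with small data, additive (Laplace) smoothing of MDP transition counts keeps unseen child reactions at nonzero probability. This is a generic smoothing sketch with invented states and actions, not necessarily the authors' exact estimator:

```python
from collections import Counter

def smoothed_transitions(triples, states, actions, k=1.0):
    """Estimate P(s' | s, a) from observed (s, a, s') triples with
    additive smoothing: add pseudo-count k to every possible outcome,
    so transitions never observed still get nonzero mass."""
    counts = Counter(triples)
    totals = Counter((s, a) for s, a, _ in triples)
    n = len(states)
    return {(s, a, s2): (counts[(s, a, s2)] + k) / (totals[(s, a)] + k * n)
            for s in states for a in actions for s2 in states}

# Hypothetical HRI log: robot action "wave", child states idle/engaged
data = [("idle", "wave", "engaged"), ("idle", "wave", "engaged"),
        ("idle", "wave", "idle"), ("engaged", "wave", "engaged")]
P = smoothed_transitions(data, ["idle", "engaged"], ["wave"])
```

With k = 1 and three observations from "idle", the estimate for the twice-seen transition is (2 + 1) / (3 + 2) = 0.6 rather than the maximum-likelihood 2/3, and state-action pairs never observed default to a uniform distribution.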
Unmanned and Unattended Response Capability for Homeland Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
BENNETT, PHIL C.
2002-11-01
An analysis was conducted of the potential for unmanned and unattended robotic technologies to provide forward-based, immediate response capabilities that enable access and controlled task performance. The authors analyze high-impact response scenarios in conjunction with homeland security organizations, such as the NNSA Office of Emergency Response, the FBI, the National Guard, and the Army Technical Escort Unit, to cover a range of radiological, chemical and biological threats. They conducted an analysis of the potential of forward-based, unmanned and unattended robotic technologies to accelerate and enhance emergency and crisis response by Homeland Defense organizations. Response system concepts were developed utilizing new technologies supported by the existing emerging-threats technology base to meet the defined response scenarios. These systems will pre-position robotic and remote sensing capabilities stationed close to multiple sites for immediate action. Analysis of assembled systems included experimental activities to determine potential efficacy in the response scenarios, and iteration on system concepts and remote sensing and robotic technologies, creating new immediate response capabilities for Homeland Defense.
Autonomous Shepherding Behaviors of Multiple Target Steering Robots.
Lee, Wonki; Kim, DaeEun
2017-11-25
This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity of each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots' position relative to the flock. We define several independent behavior rules; each agent determines its movement by summing the contributions of each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach.
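The rule-summation scheme can be sketched with two toy update rules, one per agent type. The gains, sensing range, and behind-the-flock offset below are invented for illustration; the paper's actual rule set (cohesion, alignment, collecting behavior) is richer:

```python
import math

def step_sheep(sheep, shepherd, repulsion_range=3.0, gain=0.5):
    """One flocking agent's move: the sum of independent behavior
    rules. Only the predator-avoidance rule is shown here; cohesion
    and alignment terms would be added the same way."""
    dx, dy = sheep[0] - shepherd[0], sheep[1] - shepherd[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > repulsion_range:
        return sheep  # shepherd not sensed: this rule contributes nothing
    # Move directly away from the shepherd, harder when it is closer.
    push = gain * (repulsion_range - d) / d
    return (sheep[0] + push * dx, sheep[1] + push * dy)

def step_shepherd(shepherd, sheep, goal, gain=0.4):
    """Steering agent moves toward a point behind the nearest sheep
    relative to the goal, so its presence pushes the sheep goalward."""
    bx, by = sheep[0] - goal[0], sheep[1] - goal[1]
    n = math.hypot(bx, by) or 1.0
    target = (sheep[0] + bx / n, sheep[1] + by / n)  # behind the sheep
    return (shepherd[0] + gain * (target[0] - shepherd[0]),
            shepherd[1] + gain * (target[1] - shepherd[1]))
```

Iterating these two updates reproduces the qualitative behavior described above: the sheep's avoidance tendency, driven by the shepherd's position, moves the flock toward the goal without any centralized coordinator.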
Rubenstein, Michael; Sai, Ying; Chuong, Cheng-Ming; Shen, Wei-Min
2009-01-01
This paper presents a novel perspective of Robotic Stem Cells (RSCs), defined as the basic non-biological elements with stem cell like properties that can self-reorganize to repair damage to their swarming organization. Self here means that the elements can autonomously decide and execute their actions without requiring any preset triggers, commands, or help from external sources. We develop this concept for two purposes. One is to develop a new theory for self-organization and self-assembly of multi-robots systems that can detect and recover from unforeseen errors or attacks. This self-healing and self-regeneration is used to minimize the compromise of overall function for the robot team. The other is to decipher the basic algorithms of regenerative behaviors in multi-cellular animal models, so that we can understand the fundamental principles used in the regeneration of biological systems. RSCs are envisioned to be basic building elements for future systems that are capable of self-organization, self-assembly, self-healing and self-regeneration. We first discuss the essential features of biological stem cells for such a purpose, and then propose the functional requirements of robotic stem cells with properties equivalent to gene controller, program selector and executor. We show that RSCs are a novel robotic model for scalable self-organization and self-healing in computer simulations and physical implementation. As our understanding of stem cells advances, we expect that future robots will be more versatile, resilient and complex, and such new robotic systems may also demand and inspire new knowledge from stem cell biology and related fields, such as artificial intelligence and tissue engineering.
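The self-reorganization idea can be caricatured with a toy sketch in which spare elements autonomously adopt roles vacated by failed ones, with no central command. This is a minimal stand-in, not the authors' algorithm; all names are invented:

```python
def self_heal(roles, alive, spares):
    """Minimal self-healing sketch: each role whose element has
    failed is re-filled by a spare element. The decision needs no
    preset trigger or external help, only local knowledge of which
    neighbors are still alive."""
    repaired = dict(roles)
    pool = list(spares)
    for role, element in roles.items():
        if element not in alive and pool:
            repaired[role] = pool.pop(0)  # a spare adopts the vacant role
    return repaired
```

A real RSC system would make this decision locally at each element rather than over a shared dictionary, but the invariant is the same: the organization's function survives the loss of individual elements.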
Robotic action acquisition with cognitive biases in coarse-grained state space.
Uragami, Daisuke; Kohno, Yu; Takahashi, Tatsuji
2016-07-01
Some of the authors have previously proposed a cognitively inspired reinforcement learning architecture (LS-Q) that mimics cognitive biases in humans. LS-Q adaptively learns under uniform, coarse-grained state division and performs well without parameter tuning in a giant-swing robot task. However, these results were shown only in simulations. In this study, we test the validity of the LS-Q implemented in a robot in a real environment. In addition, we analyze the learning process to elucidate the mechanism by which the LS-Q adaptively learns under the partially observable environment. We argue that the LS-Q may be a versatile reinforcement learning architecture, which is, despite its simplicity, easily applicable and does not require well-prepared settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
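The setting can be sketched as Q-learning over a uniform, coarse-grained state division. Note that LS-Q's distinguishing feature, a loosely symmetric value estimator that mimics human cognitive biases, is not reproduced here; the update below is standard Q-learning, and the bin range is an assumed placeholder:

```python
def coarse(reading, bins=8, lo=-4.0, hi=4.0):
    """Uniform, coarse-grained state division: map a continuous
    sensor reading (e.g. a joint angle) onto one of `bins` cells."""
    i = int((reading - lo) / (hi - lo) * bins)
    return min(max(i, 0), bins - 1)  # clamp out-of-range readings

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update on the coarse cells.
    LS-Q replaces this value estimate with its cognitively biased
    model; that estimator is not shown here."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
```

The coarse division is what makes the environment only partially observable to the learner, which is exactly the regime the paper analyzes.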
Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning.
Fernandez-Gauna, Borja; Etxeberria-Agiriano, Ismael; Graña, Manuel
2015-01-01
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out by the agents concurrently. In this paper we formalize and prove the convergence of a Distributed Round-Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of the action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs which lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the global optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
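The core scheduling idea, one agent acting per time slot so each learner faces a stationary environment, can be sketched as follows. The agent interface and callback names are illustrative, not the paper's API:

```python
from itertools import cycle

def round_robin_episode(agents, env_step, steps=6):
    """Sketch of round-robin action scheduling, the mechanism D-RR-QL
    uses to remove environment non-stationarity: exactly one agent
    selects and executes an action per time slot, so from each
    agent's viewpoint the others are frozen while it learns.
    `env_step(i, a)` applies agent i's action and returns a reward."""
    turn = cycle(range(len(agents)))
    log = []
    for _ in range(steps):
        i = next(turn)            # whose slot is it?
        a = agents[i].act()       # only agent i moves in this slot
        r = env_step(i, a)
        agents[i].learn(a, r)     # independent local Q-update
        log.append(i)
        # Modular state-action vetoes would prune actions leading to
        # undesired termination states before agents[i].act() here.
    return log
```

Because the slots interleave deterministically, each agent's local state-action value function can be learned as an independent process, as the abstract states.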
Understanding the internal states of others by listening to action verbs.
Di Cesare, G; Fasano, F; Errante, A; Marchi, M; Rizzolatti, G
2016-08-01
The internal state of others can be understood observing their actions or listening to their voice. While the neural bases of action style (vitality forms) have been investigated, there is no information on how we recognize others' internal state by listening to their speech. Here, using fMRI technique, we investigated the neural correlates of auditory vitality forms while participants listened to action verbs in three different conditions: human voice pronouncing the verbs in a rude and gentle way, robot voice pronouncing the same verbs without vitality forms, and a scrambled version of the same verbs pronounced by human voice. In agreement with previous studies on vitality forms encoding, we found specific activation of the central part of the insula while listening to the human voice conveying specific vitality forms. In addition, when listening both to human and robot voices there was an activation of the posterior part of the left inferior frontal gyrus and of the parieto-premotor circuit typically described to be activated during observation and execution of arm actions. Finally, the superior temporal gyrus was activated bilaterally in all three conditions. We conclude that the central part of the insula is a key region for vitality forms processing, allowing the understanding of vitality forms regardless of the modality by which they are conveyed. Copyright © 2016. Published by Elsevier Ltd.
An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger
Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico
2016-01-01
In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate or augment the existing abilities of natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and perform actions in synergy with the human limbs, the control principles of the extra finger should mirror those of the human fingers, including the ability to regulate compliance. It is therefore important to propose a control interface, together with actuators and sensing capabilities for the robotic extra finger, that are compatible with stiffness regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a one-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of a robotic extra finger where the adduction/abduction motion is realized through a ball-bearing and spur-gear mechanism. We have validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set of experiments, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate the missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger is used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface has been used. 
The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used, not only to control the motion of a supernumerary robotic finger, but also to regulate its compliance. The proposed approach can also be exploited for the control of other wearable devices that have to actively cooperate with the human limbs. PMID:27891088
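How a one-channel EMG signal might drive compliance regulation can be sketched as a simple envelope-to-stiffness mapping: a stronger muscle contraction commands a stiffer finger. The thresholds and gains below are assumed values for illustration, not the paper's calibration:

```python
def emg_to_stiffness(envelope, k_min=0.2, k_max=1.0,
                     e_rest=0.05, e_max=0.6):
    """Map a rectified, low-pass-filtered EMG envelope to a
    normalized servo stiffness. Below the resting level the finger
    stays at its most compliant setting k_min; at full contraction
    it saturates at k_max."""
    x = (envelope - e_rest) / (e_max - e_rest)
    x = min(max(x, 0.0), 1.0)  # clamp activation to [0, 1]
    return k_min + x * (k_max - k_min)
```

In a servo-actuated device of this kind, the resulting stiffness value would typically scale the position-error gain of each joint's servo loop.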
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.
1998-01-01
This paper reports a comparison between Geotail observations of plasmas and magnetic fields at 200 R(sub E) in the Earth's magnetotail with results from a time-dependent, global magnetohydrodynamic simulation of the interaction of the solar wind with the magnetosphere. The study focuses on observations from July 7, 1993, during which the Geotail spacecraft crossed the distant tail magnetospheric boundary several times while the interplanetary magnetic field (IMF) was predominantly northward and was marked by slow rotations of its clock angle. Simultaneous IMP 8 observations of solar wind ions and the IMF were used as driving input for the MHD simulation, and the resulting time series were compared directly with those from the Geotail spacecraft. The very good agreement found provided the basis for an investigation of the response of the distant tail associated with the clock angle of the IMF. Results from the simulation show that the stresses imposed by the draping of magnetosheath field lines and the asymmetric removal of magnetic flux tailward of the cusps altered considerably the shape of the distant tail as the solar wind discontinuities convected downstream of Earth. As a result, the cross section of the distant tail was considerably flattened along the direction perpendicular to the IMF clock angle, the direction of the neutral sheet following that of the IMF. The simulation also revealed that the combined action of magnetic reconnection and the slow rotation of the IMF clock angle led to a braiding of the distant tail's magnetic field lines along the axis of the tail, with the plane of the braid lying in the direction of the IMF.
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet might utilize different neural processes than those used for reading the emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777
Situationally driven local navigation for mobile robots. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Slack, Marc Glenn
1990-01-01
For mobile robots to autonomously accommodate dynamically changing navigation tasks in a goal-directed fashion, they must employ navigation plans. Any such plan must provide for the robot's immediate and continuous need for guidance while remaining highly flexible in order to avoid costly computation each time the robot's perception of the world changes. Due to the world's uncertainties, creation and maintenance of navigation plans cannot involve arbitrarily complex processes, as the robot's perception of the world will be in constant flux, requiring modifications to be made quickly if they are to be of any use. This work introduces navigation templates (NaT's) which are building blocks for the construction and maintenance of rough navigation plans which capture the relationship that objects in the world have to the current navigation task. By encoding only the critical relationship between the objects in the world and the navigation task, a NaT-based navigation plan is highly flexible; allowing new constraints to be quickly incorporated into the plan and existing constraints to be updated or deleted from the plan. To satisfy the robot's need for immediate local guidance, the NaT's forming the current navigation plan are passed to a transformation function. The transformation function analyzes the plan with respect to the robot's current location to quickly determine (a few times a second) the locally preferred direction of travel. This dissertation presents NaT's and the transformation function as well as the needed support systems to demonstrate the usefulness of the technique for controlling the actions of a mobile robot operating in an uncertain world.
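The transformation function's job, turning the current NaT-based plan plus the robot's position into a locally preferred direction of travel, can be caricatured with a potential-field-style sketch. The actual NaT computation encodes richer object-task relationships; everything below (clearance, weighting, coordinates) is illustrative:

```python
import math

def preferred_direction(robot, goal, obstacles, clearance=2.0):
    """Combine an attraction toward the goal with repulsion from each
    constraining object, returning a locally preferred heading in
    radians. The computation is purely local and cheap, so it can be
    re-run a few times a second as the plan or the robot's
    perception of the world changes."""
    vx = goal[0] - robot[0]
    vy = goal[1] - robot[1]
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < clearance:
            w = (clearance - d) / d  # nearer objects push harder
            vx += w * dx
            vy += w * dy
    return math.atan2(vy, vx)
```

The key property mirrored here is the one the dissertation emphasizes: adding, updating, or deleting a constraint only changes one term of the sum, so the plan stays cheap to maintain as the world changes.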
NASA's Asteroid Redirect Mission: The Boulder Capture Option
NASA Technical Reports Server (NTRS)
Abell, Paul A.; Nuth, J.; Mazanek, D.; Merrill, R.; Reeves, D.; Naasz, B.
2014-01-01
NASA is examining two options for the Asteroid Redirect Mission (ARM), which will return asteroid material to a Lunar Distant Retrograde Orbit (LDRO) using a robotic solar-electric-propulsion spacecraft, called the Asteroid Redirect Vehicle (ARV). Once the ARV places the asteroid material into the LDRO, a piloted mission will rendezvous and dock with the ARV. After docking, astronauts will conduct two extravehicular activities (EVAs) to inspect and sample the asteroid material before returning to Earth. One option involves capturing an entire small (approximately 4-10 m diameter) near-Earth asteroid (NEA) inside a large inflatable bag. However, NASA is examining another option that entails retrieving a boulder (approximately 1-5 m) via robotic manipulators from the surface of a larger (approximately 100+ m) pre-characterized NEA. This option can leverage robotic mission data to help ensure success by targeting previously (or soon to be) well-characterized NEAs. For example, the data from the Hayabusa mission has been utilized to develop detailed mission designs that assess options and risks associated with proximity and surface operations. Hayabusa's target NEA, Itokawa, has been identified as a valid target and is known to possess hundreds of appropriately sized boulders on its surface. Further robotic characterization of additional NEAs (e.g., Bennu and 1999 JU3) by NASA's OSIRIS REx and JAXA's Hayabusa 2 missions is planned to begin in 2018. The boulder option is an extremely large sample-return mission with the prospect of bringing back many tons of well-characterized asteroid material to the Earth-Moon system. The candidate boulder from the target NEA can be selected based on inputs from the world-wide science community, ensuring that the most scientifically interesting boulder be returned for subsequent sampling. 
This boulder option for NASA's ARM can leverage knowledge of previously characterized NEAs from prior robotic missions, which provides more certainty of the target NEA's physical characteristics and reduces mission risk. This increases the return on investment for NASA's future activities with respect to science, human exploration, resource utilization, and planetary defense.
High-Temperature, Thin-Film Ceramic Thermocouples Developed
NASA Technical Reports Server (NTRS)
Sayir, Ali; Blaha, Charles A.; Gonzalez, Jose M.
2005-01-01
To enable long-duration, more distant human and robotic missions for the Vision for Space Exploration, as well as safer, lighter, quieter, and more fuel efficient vehicles for aeronautics and space transportation, NASA is developing instrumentation and material technologies. The high-temperature capabilities of thin-film ceramic thermocouples are being explored at the NASA Glenn Research Center by the Sensors and Electronics Branch and the Ceramics Branch in partnership with Case Western Reserve University (CWRU). Glenn's Sensors and Electronics Branch is developing thin-film sensors for surface measurement of strain, temperature, heat flux, and surface flow in propulsion system research. Glenn's Ceramics Branch, in conjunction with CWRU, is developing structural and functional ceramic technology for aeropropulsion and space propulsion.
The role of intrinsic motivations in attention allocation and shifting
Di Nocera, Dario; Finzi, Alberto; Rossi, Silvia; Staffa, Mariacarla
2014-01-01
The concepts of attention and intrinsic motivations are of great interest within adaptive robotic systems, and can be exploited in order to guide, activate, and coordinate multiple concurrent behaviors. Attention allocation strategies represent key capabilities of human beings, which are strictly connected with action selection and execution mechanisms, while intrinsic motivations directly affect the allocation of attentional resources. In this paper we propose a model of Reinforcement Learning (RL), where both these capabilities are involved. RL is deployed to learn how to allocate attentional resources in a behavior-based robotic system, while action selection is obtained as a side effect of the resulting motivated attentional behaviors. Moreover, the influence of intrinsic motivations in attention orientation is obtained by introducing rewards associated with curiosity drives. In this way, the learning process is affected not only by goal-specific rewards, but also by intrinsic motivations. PMID:24744746
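The curiosity-driven shaping of the learning signal can be sketched as a count-based novelty bonus added to the goal-specific reward. The decay law and coefficient are illustrative assumptions, not the paper's model:

```python
from collections import defaultdict

def curiosity_reward(visits, state, extrinsic, beta=0.5):
    """Combine the goal-specific (extrinsic) reward with an intrinsic
    curiosity drive: rarely attended states earn a count-based
    novelty bonus, so the RL process allocating attentional
    resources is shaped by both signals, as in the paper."""
    visits[state] += 1
    bonus = beta / visits[state] ** 0.5  # decays as the state grows familiar
    return extrinsic + bonus

visits = defaultdict(int)
```

Feeding this combined reward into an otherwise standard RL update biases the learned attention-allocation policy toward novel perceptual states without changing the underlying algorithm.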
Decision support systems for robotic surgery and acute care
NASA Astrophysics Data System (ADS)
Kazanzides, Peter
2012-06-01
Doctors must frequently make decisions during medical treatment, whether in an acute care facility, such as an Intensive Care Unit (ICU), or in an operating room. These decisions rely on various information sources, such as the patient's medical history, preoperative images, and general medical knowledge. Decision support systems can assist by facilitating access to this information when and where it is needed. This paper presents some research efforts that address the integration of information with clinical practice. The example systems include a clinical decision support system (CDSS) for pediatric traumatic brain injury, an augmented reality head-mounted display for neurosurgery, and an augmented reality telerobotic system for minimally-invasive surgery. While these are different systems and applications, they share the common theme of providing information to support clinical decisions and actions, whether the actions are performed with the surgeon's own hands or with robotic assistance.
Udoekwere, Ubong I.; Oza, Chintan S.; Giszter, Simon F.
2016-01-01
Robot therapy promotes functional recovery after spinal cord injury (SCI) in animal and clinical studies. Trunk actions are important in adult rats spinalized as neonates (NTX rats) that walk autonomously. Quadrupedal robot rehabilitation was tested using an implanted orthosis at the pelvis. Trunk cortical reorganization follows such rehabilitation. Here, we test the functional outcomes of such training. Robot impedance control at the pelvis allowed hindlimb, trunk, and forelimb mechanical interactions. Rats gradually increased weight support. Rats showed significant improvement in hindlimb stepping ability, quadrupedal weight support, and all measures examined. Function in NTX rats both before and after training showed bimodal distributions, with “poor” and “high weight support” groupings. A total of 35% of rats initially classified as “poor” were able to increase their weight-supported step measures to a level considered “high weight support” after robot training, thus moving between weight support groups. Recovered function in these rats persisted on treadmill with the robot both actuated and nonactuated, but returned to pretraining levels if they were completely disconnected from the robot. Locomotor recovery in robot rehabilitation of NTX rats thus likely included context dependence and/or incorporation of models of robot mechanics that became essential parts of their learned strategy. Such learned dependence is likely a hurdle to autonomy to be overcome for many robot locomotor therapies. Notwithstanding these limitations, trunk-based quadrupedal robot rehabilitation helped the rats to visit mechanical states they would never have achieved alone, to learn novel coordinations, and to achieve major improvements in locomotor function. SIGNIFICANCE STATEMENT Neonatal spinal transected rats without any weight support can be taught weight support as adults by using robot rehabilitation at trunk. 
No adult control rats with neonatal spinal transections spontaneously achieve similar changes. The robot rehabilitation system can be inactivated and the skills that were learned persist. Responding rats cannot be detached from the robot altogether, a dependence develops in the skill learned. From data and analysis here, the likelihood of such rats to respond to the robot therapy can also now be predicted. These results are all novel. Understanding trunk roles in voluntary and spinal reflex integration after spinal cord injury and in recovery of function are broadly significant for basic and clinical understanding of motor function. PMID:27511008
Udoekwere, Ubong I; Oza, Chintan S; Giszter, Simon F
2016-08-10
Robot therapy promotes functional recovery after spinal cord injury (SCI) in animal and clinical studies. Trunk actions are important in adult rats spinalized as neonates (NTX rats) that walk autonomously. Quadrupedal robot rehabilitation was tested using an implanted orthosis at the pelvis. Trunk cortical reorganization follows such rehabilitation. Here, we test the functional outcomes of such training. Robot impedance control at the pelvis allowed hindlimb, trunk, and forelimb mechanical interactions. Rats gradually increased weight support. Rats showed significant improvement in hindlimb stepping ability, quadrupedal weight support, and all measures examined. Function in NTX rats both before and after training showed bimodal distributions, with "poor" and "high weight support" groupings. A total of 35% of rats initially classified as "poor" were able to increase their weight-supported step measures to a level considered "high weight support" after robot training, thus moving between weight support groups. Recovered function in these rats persisted on treadmill with the robot both actuated and nonactuated, but returned to pretraining levels if they were completely disconnected from the robot. Locomotor recovery in robot rehabilitation of NTX rats thus likely included context dependence and/or incorporation of models of robot mechanics that became essential parts of their learned strategy. Such learned dependence is likely a hurdle to autonomy to be overcome for many robot locomotor therapies. Notwithstanding these limitations, trunk-based quadrupedal robot rehabilitation helped the rats to visit mechanical states they would never have achieved alone, to learn novel coordinations, and to achieve major improvements in locomotor function. Neonatal spinal transected rats without any weight support can be taught weight support as adults by using robot rehabilitation at trunk. 
No adult control rats with neonatal spinal transections spontaneously achieve similar changes. The robot rehabilitation system can be inactivated and the skills that were learned persist. However, responding rats cannot be detached from the robot altogether; a dependence develops in the learned skill. From the data and analysis here, the likelihood that such rats will respond to the robot therapy can also now be predicted. These results are all novel. Understanding trunk roles in voluntary and spinal reflex integration after spinal cord injury and in recovery of function is broadly significant for basic and clinical understanding of motor function. Copyright © 2016 the authors 0270-6474/16/368341-15$15.00/0.
Robots with a gentle touch: advances in assistive robotics and prosthetics.
Harwin, W S
1999-01-01
As healthcare costs rise and an aging population makes increased demands on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so that they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has strong implications in health care, in particular for rehabilitation, where a robot must work in conjunction with an individual, and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state of the art of rehabilitation robots, with examples from prosthetics, aids for daily living and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. This paper concludes with a discussion of this problem and an overview of some human models that can be used to facilitate the design of human/machine interfaces.
A neurorobotic platform for locomotor prosthetic development in rats and mice
NASA Astrophysics Data System (ADS)
von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire
2016-04-01
Objectives. We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. Approach. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Main Results. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. Significance. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
A neurorobotic platform for locomotor prosthetic development in rats and mice.
von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire
2016-04-01
We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
NASA Technical Reports Server (NTRS)
Porter, Derrick
2014-01-01
The Mission Operations Directorate (MOD) is responsible for the training, planning and performance of all U.S. manned operations in space. Within this directorate, responsibilities are divided among divisions. The EVA, Robotics & Crew Systems Operations Division performs ground operations and trains astronauts to carry out some of the more "high action" procedures in space. For example, they orchestrate procedures such as EVAs, or ExtraVehicular Activities (spacewalks), and robotics operations external to the International Space Station (ISS). The robotics branch of this division is responsible for the use of the Mobile Servicing System (MSS). This system is a combination of two robotic mechanisms and a series of equipment used to transport them on the ISS. The MSS is used to capture and position visiting vehicles, transport astronauts during EVAs, and perform external maintenance tasks on the ISS. This branch consists of two groups, which are responsible for crew training and flight control, respectively. My first co-op tour took place in Fall 2013. During this time I was given the opportunity to work in the robotics operations branch of the Mission Operations Directorate at NASA's Johnson Space Center. I was given a variety of tasks that encompassed, at a base level, all the aspects of the branch.
A concept for ubiquitous robotics in industrial environment
NASA Astrophysics Data System (ADS)
Sallinen, Mikko; Heilala, Juhani; Kivikunnas, Sauli
2007-09-01
In this paper a concept for industrial ubiquitous robotics is presented. The concept combines two different approaches to managing agile, adaptable production: first, the human operator is kept strongly in the production loop and second, the robot workcell is made more autonomous and smarter in managing production. Such an autonomous robot cell can be called a production island. Communication with the human operator working in this kind of smart industrial environment can be divided into two levels: body area communication and operator-infrastructure communication, the latter covering devices, machines and infrastructure. Body area communication can be supportive in two directions: data is recorded by measuring physical actions, such as hand movements and body gestures, or information is provided to the user, such as guides or manuals for operation. Body area communication can be carried out using short-range communication technologies such as NFC (Near Field Communication), an RFID-type technology. For operator-infrastructure communication, WLAN or Bluetooth communication can be used. Beyond current Human-Machine Interaction (HMI) systems, the presented system concept is designed to fulfill the requirements of hybrid, knowledge-intensive manufacturing in the future, where humans and robots operate in close co-operation.
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
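The divergence check described above can be sketched in miniature. This is an illustrative stand-in for the Match-Mediated Difference component, not the authors' implementation: a real camera frame is compared pixel-by-pixel with the frame a virtual camera renders from the world model, and the model is flagged as stale when too many pixels disagree. The tolerance values and the 8x8 toy frames are invented.

```python
# Illustrative sketch (not the paper's code): flag significant divergence
# between a real camera frame and the corresponding virtual-camera render.
def significant_difference(real, virtual, pixel_tol=10, frac_tol=0.05):
    """True if more than frac_tol of pixels differ by more than pixel_tol."""
    total = len(real) * len(real[0])
    mismatched = sum(
        1
        for row_r, row_v in zip(real, virtual)
        for r, v in zip(row_r, row_v)
        if abs(r - v) > pixel_tol
    )
    return mismatched / total > frac_tol

# A new object appears in the real frame but not in the rendered one.
virtual = [[0] * 8 for _ in range(8)]
real = [row[:] for row in virtual]
for i in range(4, 8):
    for j in range(4, 8):
        real[i][j] = 200
print(significant_difference(real, virtual))  # → True: re-render and re-plan
```

A positive result would correspond to notifying Soar/RS so that the relevant object is re-rendered into PhysX before planning resumes.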
Selective automation and skill transfer in medical robotics: a demonstration on surgical knot-tying.
Knoll, Alois; Mayer, Hermann; Staub, Christoph; Bauernschmitt, Robert
2012-12-01
Transferring non-trivial human manipulation skills to robot systems is a challenging task. There have been a number of attempts to design research systems for skill transfer, but the level of the complexity of the actual skills transferable to the robot was rather limited, and delicate operations requiring a high dexterity and long action sequences with many sub-operations were impossible to transfer. A novel approach to human-machine skill transfer for multi-arm robot systems is presented. The methodology capitalizes on the metaphor of 'scaffolded learning', which has gained widespread acceptance in psychology. The main idea is to formalize the superior knowledge of a teacher in a certain way to generate support for a trainee. In our case, the scaffolding is constituted by abstract patterns, which facilitate the structuring and segmentation of information during 'learning by demonstration'. The actual skill generalization is then based on simulating fluid dynamics. The approach has been successfully evaluated in the medical domain for the delicate task of automated knot-tying for suturing with standard surgical instruments and a realistic minimally invasive robotic surgery system. Copyright © 2012 John Wiley & Sons, Ltd.
[Surgical robotics, short state of the art and prospects].
Gravez, P
2003-11-01
State-of-the-art robotized systems developed for surgery are either remotely controlled manipulators that duplicate gestures made by the surgeon (endoscopic surgery applications), or automated robots that execute trajectories defined relative to pre-operative medical imaging (neurosurgery and orthopaedic surgery). This generation of systems primarily applies existing robotics technologies (remote handling systems and so-called "industrial robots") to current surgical practices. It has helped to validate the huge potential of surgical robotics, but it suffers from several drawbacks, mainly high costs, excessive dimensions and some lack of user-friendliness. Nevertheless, technological progress lets us anticipate the appearance in the near future of miniaturised surgical robots able to assist the surgeon's gesture and to enhance his perception of the operation at hand. Thanks to many in-the-body articulated links, these systems will be able to perform complex minimally invasive gestures without obstructing the operating theatre. They will also combine the ease of manual piloting with the accuracy and increased safety of computer control, guiding the gestures of the human without infringing on his freedom of action. Lastly, they will allow the surgeon to feel the mechanical properties of the tissues he is operating on through a genuine "remote palpation" function. Most probably, such technological evolutions will lead the way to redesigned surgical procedures taking place inside new operating rooms featuring better integration of all equipment and favouring cooperative work by multidisciplinary and sometimes geographically distributed medical staff.
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
2014-01-01
Advancement in brain-computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and enhance the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by combining multisensory feedback for the BCI user. PMID:24987350
Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution.
Boteanu, Adrian; St Clair, Aaron; Mohseni-Kabir, Anahita; Saldanha, Carl; Chernova, Sonia
2016-12-01
This work seeks to leverage semantic networks containing millions of entries encoding assertions of commonsense knowledge to enable improvements in robot task execution and learning. The specific application we explore in this project is object substitution in the context of task adaptation. Humans easily adapt their plans to compensate for missing items in day-to-day tasks, substituting a wrap for bread when making a sandwich, or stirring pasta with a fork when out of spoons. Robot plan execution, however, is far less robust, with missing objects typically leading to failure if the robot is not aware of alternatives. In this article, we contribute a context-aware algorithm that leverages the linguistic information embedded in the task description to identify candidate substitution objects without reliance on explicit object affordance information. Specifically, we show that the task context provided by the task labels within the action structure of a task plan can be leveraged to disambiguate information within a noisy large-scale semantic network containing hundreds of potential object candidates to identify successful object substitutions with high accuracy. We present two extensive evaluations of our work on both abstract and real-world robot tasks, showing that the substitutions made by our system are valid, accepted by users, and lead to a statistically significant reduction in robot learning time. In addition, we report the outcomes of testing our approach with a large number of crowd workers interacting with a robot in real time.
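The context-aware substitution idea can be sketched as follows. This is an illustrative toy, not the authors' algorithm: each candidate object is scored by its relatedness both to the missing object and to the task label, so the task context disambiguates among candidates. The hand-coded relatedness scores stand in for queries to a large semantic network such as ConceptNet.

```python
# Illustrative sketch: ranking substitutes for a missing object by combining
# relatedness to the missing object with relatedness to the task context.
# The RELATEDNESS scores are invented stand-ins for a semantic network.
RELATEDNESS = {  # hypothetical pairwise scores in [0, 1]
    ("bread", "wrap"): 0.8, ("bread", "fork"): 0.1, ("bread", "plate"): 0.4,
    ("sandwich", "wrap"): 0.7, ("sandwich", "fork"): 0.2, ("sandwich", "plate"): 0.5,
}

def relatedness(a, b):
    return RELATEDNESS.get((a, b), RELATEDNESS.get((b, a), 0.0))

def rank_substitutes(missing, task_context, candidates):
    """Score candidates by similarity to the missing object, disambiguated
    by the task label (the 'context-aware' step)."""
    def score(c):
        return relatedness(missing, c) + relatedness(task_context, c)
    return sorted(candidates, key=score, reverse=True)

ranked = rank_substitutes("bread", "sandwich", ["wrap", "fork", "plate"])
print(ranked[0])  # → wrap
```

In the real system the scores would come from a noisy network with hundreds of candidates, which is what makes the context term necessary.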
Physics-Based Robot Motion Planning in Dynamic Multi-Body Environments
2010-05-10
be actuated by external influences and interactions, such as being carried or pushed. Foreign-controlled bodies are actively actuated, but by external... from the action space A. How this action is generated can strongly influence the overall behavior and performance of our planner and will be discussed in... evolving game-state and unpredictable player-input), an animator cannot manually adjust these controls in advance. The planning approaches introduced in
Boos, Beverly; Kimel, Sasha Y.; Obaidi, Milan; Shani, Maor; Thomsen, Lotte
2018-01-01
Humans are a coalitional, parochial species. Yet, extreme actions of solidarity are sometimes taken for distant or unrelated groups. What motivates people to become solidary with groups to which they do not belong originally? Here, we demonstrate that such distant solidarity can occur when the perceived treatment of an out-group clashes with one’s political beliefs (e.g., for Leftists, oppressive occupation of the out-group) and that it is driven by fusion (or a feeling of oneness) with distant others with whom one does not share any common social category such as nationality, ethnicity or religion. In Study 1, being politically Leftist predicted European-Americans’ willingness to engage in extreme protest on behalf of Palestinians, which was mediated by fusion with the out-group. Next, in Study 2, we examined whether this pattern was moderated by out-group type. Here, Norwegian Leftists fused more with Palestinians (i.e., a group that, in the Norwegian context, is perceived to be occupied in an asymmetrical conflict) rather than Kurds (i.e., a group for which this perception is less salient). In Study 3, we experimentally tested the underlying mechanism by framing the Kurdish conflict in terms of an asymmetrical occupation (vs. symmetrical war or control conditions) and found that this increased Leftist European-Americans’ fusion with Kurds. Finally, in Study 4, we used a unique sample of non-Kurdish aspiring foreign fighters who were in the process of joining the Kurdish militia YPG. Here, fusion with the out-group predicted a greater likelihood to join and support the Kurdish forces in their fight against ISIS, insofar as respondents experienced that their political orientation morally compelled them to do so (Study 4). Together, our findings suggest that politically motivated fusion with out-groups underpins the extreme solidary action people may take on behalf of distant out-groups. Implications for future theory and research are discussed. PMID:29304156
Counterfactual quantum erasure: spooky action without entanglement
2018-01-01
We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: A distant Bob can decide to erase which-path information from Alice’s photon, dramatically restoring interference—without previously shared entanglement, and without Alice’s photon ever leaving her laboratory. PMID:29515845
Knowledge Transfer between Two Geographically Distant Action Research Teams
ERIC Educational Resources Information Center
Desmarais, Lise; Parent, Robert; Leclerc, Louise; Raymond, Lysanne; MacKinnon, Scott; Vezina, Nicole
2009-01-01
Purpose: The objective of this study is to observe and document the transfer of a train the trainers program in knife sharpening and steeling. This knowledge transfer involved two groups of researchers: the experts and the learners. These groups are from geographically dispersed regions and evolve in distinct contexts by their language and…
Path Planning for Non-Circular, Non-Holonomic Robots in Highly Cluttered Environments.
Samaniego, Ricardo; Lopez, Joaquin; Vazquez, Fernando
2017-08-15
This paper presents an algorithm for finding a solution to the problem of planning a feasible path for a slender autonomous mobile robot in a large and cluttered environment. The presented approach is based on performing a graph search on a kinodynamic-feasible lattice state space of high resolution; however, the technique is applicable to many search algorithms. With the purpose of allowing the algorithm to consider paths that take the robot through narrow passes and close to obstacles, high resolutions are used for the lattice space and the control set. This introduces new challenges because one of the most computationally expensive parts of path search based planning algorithms is calculating the cost of each one of the actions or steps that could potentially be part of the trajectory. The reason for this is that the evaluation of each one of these actions involves convolving the robot's footprint with a portion of a local map to evaluate the possibility of a collision, an operation that grows exponentially as the resolution is increased. The novel approach presented here reduces the need for these convolutions by using a set of offline precomputed maps that are updated, by means of a partial convolution, as new information arrives from sensors or other sources. Not only does this improve run-time performance, but it also provides support for dynamic search in changing environments. A set of alternative fast convolution methods are also proposed, depending on whether the environment is cluttered with obstacles or not. Finally, we provide both theoretical and experimental results from different experiments and applications.
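The expensive step the abstract describes, testing a non-circular footprint against the map at each candidate action, can be sketched in miniature. This is an illustrative toy, not the paper's precomputed-map method: the robot's rectangular footprint is rasterized into cell offsets and "convolved" with an occupancy grid at a candidate pose; the cost of doing this per action is what motivates the authors' offline precomputation.

```python
# Illustrative sketch: collision test of a rectangular footprint against an
# occupancy grid at a given pose. Names and grid layout are assumptions.
import math

def footprint_cells(length, width, resolution):
    """Rectangular footprint as cell offsets around the robot centre."""
    nx = round(length / resolution)
    ny = round(width / resolution)
    return [(i - nx // 2, j - ny // 2) for i in range(nx) for j in range(ny)]

def rotate(cell, theta):
    i, j = cell
    c, s = math.cos(theta), math.sin(theta)
    return (round(i * c - j * s), round(i * s + j * c))

def collides(grid, pose, footprint, theta):
    """True if any footprint cell lands on an occupied grid cell."""
    x, y = pose
    for cell in footprint:
        di, dj = rotate(cell, theta)
        i, j = x + di, y + dj
        if 0 <= i < len(grid) and 0 <= j < len(grid[0]) and grid[i][j]:
            return True
    return False

grid = [[0] * 10 for _ in range(10)]
grid[5][5] = 1  # a single obstacle
fp = footprint_cells(length=0.6, width=0.2, resolution=0.1)  # slender robot
print(collides(grid, (5, 5), fp, 0.0))  # → True: footprint overlaps obstacle
```

Doubling the resolution quadruples the number of footprint cells tested per pose, which is why caching these checks in precomputed maps pays off.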
Functional Contour-following via Haptic Perception and Reinforcement Learning.
Hellman, Randall B; Tekin, Cem; van der Schaar, Mihaela; Santos, Veronica J
2018-01-01
Many tasks involve the fine manipulation of objects despite limited visual feedback. In such scenarios, tactile and proprioceptive feedback can be leveraged for task completion. We present an approach for real-time haptic perception and decision-making for a haptics-driven, functional contour-following task: the closure of a ziplock bag. This task is challenging for robots because the bag is deformable, transparent, and visually occluded by artificial fingertip sensors that are also compliant. A deep neural net classifier was trained to estimate the state of a zipper within a robot's pinch grasp. A Contextual Multi-Armed Bandit (C-MAB) reinforcement learning algorithm was implemented to maximize cumulative rewards by balancing exploration versus exploitation of the state-action space. The C-MAB learner outperformed a benchmark Q-learner by more efficiently exploring the state-action space while learning a hard-to-code task. The learned C-MAB policy was tested with novel ziplock bag scenarios and contours (wire, rope). Importantly, this work contributes to the development of reinforcement learning approaches that account for limited resources such as hardware life and researcher time. As robots are used to perform complex, physically interactive tasks in unstructured or unmodeled environments, it becomes important to develop methods that enable efficient and effective learning with physical testbeds.
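The exploration-exploitation trade-off at the heart of the C-MAB learner can be illustrated with a toy contextual bandit. This is a minimal epsilon-greedy sketch, not the authors' algorithm: the contexts, actions, and reward function are invented stand-ins for the haptic states and manipulation actions of the zipper task.

```python
# Toy contextual multi-armed bandit: per-(context, action) running-mean
# value estimates with epsilon-greedy action selection. All task names
# and the reward model are illustrative assumptions.
import random

class ContextualBandit:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.values = {}   # (context, action) -> running mean reward
        self.counts = {}

    def choose(self, context):
        if random.random() < self.epsilon:          # explore
            return random.choice(self.actions)
        return max(self.actions,                    # exploit
                   key=lambda a: self.values.get((context, a), 0.0))

    def update(self, context, action, reward):
        key = (context, action)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        mean = self.values.get(key, 0.0)
        self.values[key] = mean + (reward - mean) / n

random.seed(0)
bandit = ContextualBandit(actions=["pinch_tighter", "slide_along"])
for _ in range(500):
    ctx = random.choice(["zipper_aligned", "zipper_slipping"])
    act = bandit.choose(ctx)
    # invented reward model: sliding works when aligned, re-pinching when slipping
    reward = 1.0 if (ctx == "zipper_aligned") == (act == "slide_along") else 0.0
    bandit.update(ctx, act, reward)

best = max(bandit.actions,
           key=lambda a: bandit.values.get(("zipper_aligned", a), 0.0))
print(best)  # → slide_along
```

The point mirrored from the abstract is sample efficiency: because value estimates are conditioned on context, the learner stops wasting trials on actions already known to be poor in that state, which matters when each trial consumes hardware life.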
Feys, Peter; Coninx, Karin; Kerkhofs, Lore; De Weyer, Tom; Truyens, Veronik; Maris, Anneleen; Lamers, Ilse
2015-07-23
Despite the functional impact of upper limb dysfunction in multiple sclerosis (MS), the effects of intensive exercise programs, and specifically robot-supported training, have rarely been investigated in persons with advanced MS. To investigate the effects of additional robot-supported upper limb training in persons with MS compared to conventional treatment only. Seventeen persons with MS (pwMS) (median Expanded Disability Status Scale of 8, range 3.5-8.5) were included in a pilot RCT comparing the effects of additional robot-supported training to conventional treatment only. Additional training consisted of 3 weekly sessions of 30 min interacting with the HapticMaster robot within an individualised virtual learning environment (I-TRAVLE). Clinical measures at body function (hand grip strength, Motricity Index, Fugl-Meyer) and activity (Action Research Arm Test, Motor Activity Log) level were administered before and after an intervention period of 8 weeks. The intervention group was also evaluated on robot-mediated movement tasks in three dimensions, providing active range of motion, movement duration and speed, and hand-path ratio as an indication of movement efficiency in the spatial domain. Non-parametric statistics were applied. PwMS commented favourably on the robot-supported virtual learning environment and reported functional training effects in daily life. Movement tasks in three dimensions, measured with the robot, were performed in less time and, for the transporting and reaching movement tasks, more efficiently. There were however no significant changes for any clinical measure in either the intervention or the control group, although observational analyses of the included cases indicated large improvements on the Fugl-Meyer in persons with more marked upper limb dysfunction. Robot-supported training led to more efficient movement execution, which was however, at group level, not reflected by significant changes on standard clinical tests.
Persons with more marked upper limb dysfunction may benefit most from additional robot-supported training, but larger studies are needed. This trial is registered within the registry Clinical Trials GOV ( NCT02257606 ).
Comparison of Human and Humanoid Robot Control of Upright Stance
Peterka, Robert J.
2009-01-01
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to ~1 Hz) dynamic characteristics of human stance control. These subsystems are 1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions, and 2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design.
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different. PMID:19665564
Comparison of human and humanoid robot control of upright stance.
Peterka, Robert J
2009-01-01
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. 
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
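The sensory re-weighting idea described in the abstract can be sketched numerically. This is an illustrative toy, not Peterka's model: body tilt is estimated as a weighted sum of sensor channels whose weights sum to 1, and weight is shifted away from a channel when its cue becomes unreliable. The channel names, weights, and readings are invented.

```python
# Toy sensory re-weighting: a weighted combination of orientation cues,
# with weight shifted off an unreliable channel. Values are illustrative.
def estimate_tilt(readings, weights):
    """Weighted combination of orientation cues (degrees)."""
    return sum(weights[ch] * readings[ch] for ch in readings)

def reweight(weights, unreliable, floor=0.1):
    """Shift weight from an unreliable channel to the others, renormalised."""
    new = dict(weights)
    new[unreliable] = floor
    rest = sum(v for ch, v in new.items() if ch != unreliable)
    for ch in new:
        if ch != unreliable:
            new[ch] = new[ch] * (1.0 - floor) / rest
    return new

weights = {"vestibular": 0.4, "proprioceptive": 0.4, "visual": 0.2}
readings = {"vestibular": 1.0, "proprioceptive": 1.2, "visual": 6.0}  # bad visual cue
print(round(estimate_tilt(readings, weights), 2))   # → 2.08, pulled off upright
weights = reweight(weights, "visual")               # e.g. a moving visual scene
print(round(estimate_tilt(readings, weights), 2))   # → 1.59, closer to true tilt
```

A fixed-weight controller, which is roughly the situation of the robot with ZMP compensation alone, would keep trusting the corrupted channel and drift toward instability.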
IRA: Intrusion - Reaction - Appats
2004-11-01
second phase was to deploy an IDS within a test platform in order to present its different aspects. The three ... local and/or remote to the Information Systems Security Manager (Responsable SSI) 4 / Analysis 3.2.2 IDS modes of action Three modes of action are characteristic of these tools ... and the capture of information. The following figure shows what a honeynet could look like: three networks separated by a firewall: the Internet
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Autonomous Shepherding Behaviors of Multiple Target Steering Robots
Lee, Wonki; Kim, DaeEun
2017-01-01
This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity to each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots’ position relative to the flock. We define several independent behavior rules; each agent calculates to what extent it will move by summarizing each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach. PMID:29186836
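The "each agent sums independent behavior rules" scheme in the abstract can be sketched for a single sheep-like agent. This is an illustrative 2-D toy, not the authors' control law: separation from close neighbours, cohesion toward the local centre, and avoidance of steering (shepherd) agents are computed as vectors and summed with assumed weights.

```python
# Toy weighted-sum behavior rules for one flocking agent (2-D).
# Weights and geometry are illustrative assumptions, not from the paper.
def vadd(*vs):
    return tuple(sum(c) for c in zip(*vs))

def vscale(v, k):
    return (v[0] * k, v[1] * k)

def vsub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def flock_step(agent, neighbours, shepherds,
               w_sep=1.0, w_coh=0.5, w_avoid=2.0):
    """Summed behaviour rules for one sheep-like agent."""
    cx = sum(n[0] for n in neighbours) / len(neighbours)
    cy = sum(n[1] for n in neighbours) / len(neighbours)
    cohesion = vsub((cx, cy), agent)                       # toward local centre
    separation = vadd(*(vsub(agent, n) for n in neighbours))  # away from crowding
    avoid = vadd(*(vsub(agent, s) for s in shepherds))     # away from steering agents
    return vadd(vscale(separation, w_sep),
                vscale(cohesion, w_coh),
                vscale(avoid, w_avoid))

# A shepherd approaching from the left pushes the agent to the right.
step = flock_step(agent=(0.0, 0.0),
                  neighbours=[(1.0, 0.0), (-1.0, 0.0)],
                  shepherds=[(-2.0, 0.0)])
print(step)  # → (4.0, 0.0): motion directly away from the shepherd
```

The avoidance term dominating the sum is exactly the tendency the paper exploits: shepherd agents need only position themselves so that this repulsion drives the flock toward the goal.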
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner.
This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
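The MODT-based consensus described above, in which all behaviors contribute simultaneously rather than competing winner-take-all, can be illustrated by scoring each candidate action against every behavior and choosing the best weighted sum. The action names, preference values, and weights below are hypothetical, not CARACaS internals:

```python
def consensus_action(actions, behaviors, weights):
    """MODT-style consensus sketch: every behavior scores every candidate
    action with a preference in [0, 1], and the action with the highest
    weighted sum is executed, so all behaviors contribute simultaneously
    rather than one behavior winning outright."""
    return max(actions, key=lambda a: sum(w * b(a)
                                          for w, b in zip(weights, behaviors)))
```

For example, an obstacle-avoidance behavior that strongly prefers turning left can outweigh a goal-seeking behavior that mildly prefers going straight, yielding a compromise that respects both objectives.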
Zawadzki, Marek; Krzystek-Korpacka, Malgorzata; Gamian, Andrzej; Witkiewicz, Wojciech
2017-03-01
Robotic colorectal surgery continues to rise in popularity, but there remains little evidence on the stress response following the procedure. The aim of this study was to evaluate the inflammatory response to robotic colorectal surgery and compare it with the response generated by open colorectal surgery. This was a prospective nonrandomized comparative study involving 61 patients with colorectal cancer. The evaluation of inflammatory response to either robotic or open colorectal surgery was expressed as changes in interleukin-1β, interleukin-1 receptor antagonist, interleukin-6, tumor necrosis factor-α, C-reactive protein, and procalcitonin during the first three postoperative days. Of the 61 patients, 33 underwent robotic colorectal surgery while 28 had open colorectal surgery. Groups were comparable with respect to age, sex, BMI, cancer stage, and type of resection. The relative increase of interleukin-1 receptor antagonist at 8 h postoperative, compared to baseline, was higher in the open group (P = 0.006). The decrease of interleukin-1 receptor antagonist on postoperative days 1 and 3, compared to the maximum at 8 h, was more pronounced in the open group than in the robotic group (P = 0.008, P = 0.006, respectively), and the relative increase of interleukin-6 at 8 h after incision was higher in the open group (P = 0.007). The relative increase of procalcitonin on postoperative days 1 and 3 was higher in the open group than the robotic group (P < 0.001, P = 0.004, respectively). This study shows that when compared with open colorectal surgery, robotic colorectal surgery results in a less pronounced inflammatory response and more pronounced anti-inflammatory action.
Heterogeneous Multi-Robot Cooperation
1994-02-01
1992a) Maja Mataric. Designing emergent behaviors: From local interactions to collective intelligence. In J. Meyer, H. Roitblat, and S. Wilson, editors... [Parker, 1992] Lynne E. Parker. Adaptive action selection for cooperative agent teams. In Jean-Arcady Meyer, Herbert Roitblat, and Stewart Wilson, editors
Productive Information Foraging
NASA Technical Reports Server (NTRS)
Furlong, P. Michael; Dille, Michael
2016-01-01
This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective of the algorithm is to free robot scientists from extensive preliminary site investigation while still being able to collect meaningful data. We simulate a common form of exploration task for an autonomous robot involving sampling the environment at various locations and compare performance with a simpler existing algorithm that is also denied global information. The result of the experiment shows that the new algorithm has a statistically significant improvement in performance with a significant effect size for a range of costs for taking sampling actions.
NASA Technical Reports Server (NTRS)
Curtis, Steven A.
2010-01-01
A proposed mobile robot, denoted the amorphous rover, would vary its own size and shape in order to traverse terrain by means of rolling and/or slithering action. The amorphous rover was conceived as a robust, lightweight alternative to the wheeled rover-class robotic vehicle heretofore used in exploration of Mars. Unlike a wheeled rover, the amorphous rover would not have a predefined front, back, top, bottom, or sides. Hence, maneuvering of the amorphous rover would be more robust: the amorphous rover would not be vulnerable to overturning, could move backward or sideways as well as forward, and could even narrow itself to squeeze through small openings.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability, the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of the goal shared between the user and the agent. Since the agent and brain share the same goal, a symbiotic relationship between them will evolve; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as changes in the pattern of neural activity.
The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
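The Actor-Critic decoding loop described above can be reduced to a minimal tabular sketch: the Actor maps a discretized neural state to an action via a softmax policy, and a scalar evaluative signal (standing in for the Critic's NAcc-derived reward prediction) strengthens or weakens the chosen mapping. State and action counts and the learning rate are illustrative assumptions:

```python
import numpy as np

class ActorCriticDecoder:
    """Minimal tabular sketch of the decoding architecture: the Actor maps
    a discrete neural state to a robot action through a softmax policy, and
    a scalar evaluative signal (the Critic's role in the dissertation,
    estimated there from NAcc activity) reinforces or weakens the chosen
    state-action mapping. Sizes and learning rate are illustrative."""

    def __init__(self, n_states, n_actions, lr=0.5, seed=0):
        self.pref = np.zeros((n_states, n_actions))  # action preferences
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        p = np.exp(self.pref[state] - self.pref[state].max())
        p /= p.sum()                                 # softmax policy
        return int(self.rng.choice(len(p), p=p))

    def learn(self, state, action, evaluation):
        # positive evaluation strengthens the mapping, negative weakens it
        self.pref[state, action] += self.lr * evaluation
```

Repeated trials in which the correct action earns a positive evaluation drive the preferences toward the right decoding, without the decoder ever seeing an explicit target action.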
Intelligent robot trends and predictions for the .net future
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
2001-10-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors, and controls. The use of these machines in factory automation can improve productivity, increase product quality, and improve competitiveness. This paper presents a discussion of recent and future technical and economic trends. During the past twenty years, the use of industrial robots that are equipped not only with precise motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. Intelligent robot products have been developed in many cases for factory automation and for some hospital and home applications. To reach an even wider range of applications, the addition of learning may be required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The adaptive critic is a good model for human learning. In general, the critic may be considered to be the human with the teach pendant, the plant manager, the line supervisor, the quality inspector, or the consumer. If the ultimate critic is the consumer, then the quality inspector must model the consumer's decision-making process and use this model in the design and manufacturing operations. Can the adaptive critic be used to advance intelligent robots? Intelligent robots have historically taken decades to be developed and reduced to practice. Methods for speeding this development include technologies such as rapid prototyping and product development, as well as cooperation among government, industry, and universities.
Design of Biomedical Robots for Phenotype Prediction Problems
deAndrés-Galiana, Enrique J.; Sonis, Stephen T.
2016-01-01
Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness, optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem. PMID:27347715
Design of Biomedical Robots for Phenotype Prediction Problems.
deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan Luis; Sonis, Stephen T
2016-08-01
Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness, optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem.
NASA Astrophysics Data System (ADS)
Patkin, M. L.; Rogachev, G. N.
2018-02-01
A method for constructing a multi-agent control system for mobile robots, based on reinforcement learning with deep neural networks, is considered. The control system is synthesized via reinforcement learning with a modified Actor-Critic method, in which the Actor module is divided into an Action Actor and a Communication Actor in order to simultaneously control the mobile robots and communicate with partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The functions of the Actors and the Critic are approximated by deep neural networks. The Critic's value function is trained using the TD-error method, and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and computer simulation of the method was carried out on the control problem of two robots pursuing two goals.
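The split of the Actor into an Action Actor and a Communication Actor can be sketched as follows. Single tanh layers stand in for the deep networks, both agents share the same networks for brevity, and all dimensions are illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    """A single tanh layer standing in for a deep network (illustrative)."""
    W = rng.normal(scale=0.1, size=(out_dim, in_dim))
    return lambda x: np.tanh(W @ x)

obs_dim, act_dim, msg_dim = 6, 2, 3               # illustrative sizes
action_actor = layer(obs_dim + msg_dim, act_dim)  # drives the robot
comm_actor = layer(obs_dim, msg_dim)              # produces the message

def step(own_obs, partner_obs):
    """One decision step for one agent: the partner's message is appended
    to the agent's observation vector before the Action Actor produces
    motor commands, mirroring the scheme in the abstract (both agents
    share the same networks here for brevity)."""
    msg_out = comm_actor(own_obs)                 # sent to the partner
    msg_in = comm_actor(partner_obs)              # received this step
    action = action_actor(np.concatenate([own_obs, msg_in]))
    return action, msg_out
```

In the paper the Communication Actor is trained through gradients received from partner agents; here the layers are frozen and the sketch only shows how the message augments the observation.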
Squire, P N; Parasuraman, R
2010-08-01
The present study assessed the impact of task load and level of automation (LOA) on task switching in participants supervising a team of four or eight semi-autonomous robots in a simulated 'capture the flag' game. Participants were faster to perform the same task than when they chose to switch between different task actions. They also took longer to switch between different tasks when supervising the robots at a high compared to a low LOA. Task load, as manipulated by the number of robots to be supervised, did not influence switch costs. The results suggest that the design of future unmanned vehicle (UV) systems should take into account not simply how many UVs an operator can supervise, but also the impact of LOA and task operations on task switching during supervision of multiple UVs. The findings of this study are relevant for the ergonomics practice of UV systems. This research extends the cognitive theory of task switching to inform the design of UV systems and results show that switching between UVs is an important factor to consider.
High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning
Sun, Wen; Patil, Sachin; Alterovitz, Ron
2015-01-01
As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645
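The per-period logic of HFR (run several planners concurrently, keep the best feasible plan, execute only its first action, then replan) can be sketched independently of any particular sampling-based planner. The planner stubs and cost function in the test are hypothetical stand-ins:

```python
def hfr_period(planners, state, cost):
    """One HFR period: invoke several sampling-based planners (run in
    parallel in the paper; called sequentially here for clarity), keep the
    lowest-cost feasible plan, and return its first action for immediate
    execution. Each planner maps a state to a list of actions, or to None
    if it failed to find a plan within the period."""
    plans = [p(state) for p in planners]
    plans = [plan for plan in plans if plan]
    if not plans:
        return None   # no feasible plan this period; replan from the next state
    return min(plans, key=cost)[0]
```

Only the first action of the winning plan is ever executed; the rest of the plan is discarded and recomputed next period, which is what lets the robot absorb motion and sensing uncertainty.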
Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz
2016-07-01
Scene supervision is a major tool for making medical robots safer and more intuitive. The paper shows an approach to efficiently using 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented, and calibrated. Calibration, object detection, and people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into a modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.
Titan: a distant but enticing destination for human visitors.
Nott, Julian
2009-10-01
Until recently, very little was known about Saturn's largest satellite, Titan. But that has changed dramatically since the Cassini spacecraft started orbiting in the Saturn system in 2004. Larger than Mercury and with a dense atmosphere, Titan has many of the characteristics of a planet. Indeed, many scientists now see it as the most interesting place in the Solar System for robotic exploration, with many unique features and even the possibility of exotic forms of life. This paper points out that Titan is also a potential destination for humans. With its predominantly nitrogen atmosphere, moderate gravity, and available water and oxygen, it also appears that, once it becomes possible to travel there, it will prove to be much more hospitable for human visitors than any other destination in the Solar System.
Rendezvous and Docking Strategy for Crewed Segment of the Asteroid Redirect Mission
NASA Technical Reports Server (NTRS)
Hinkel, Heather D.; Cryan, Scott P.; D'Souza, Christopher; Dannemiller, David P.; Brazzel, Jack P.; Condon, Gerald L.; Othon, William L.; Williams, Jacob
2014-01-01
This paper will describe the overall rendezvous, proximity operations and docking (RPOD) strategy in support of the Asteroid Redirect Crewed Mission (ARCM), as part of the Asteroid Redirect Mission (ARM). The focus of the paper is on the crewed mission phase of ARM, starting with the establishment of Orion in the Distant Retrograde Orbit (DRO) and ending with docking to the Asteroid Redirect Vehicle (ARV). The paper will detail the sequence of maneuvers required to execute the rendezvous and proximity operations mission phases along with the on-board navigation strategies, including the final approach phase. The trajectories to be considered will include target vehicles in a DRO. The paper will also discuss the sensor requirements for rendezvous and docking and the various trade studies associated with the final sensor selection. Building on the sensor requirements and trade studies, the paper will include a candidate sensor concept of operations, which will drive the selection of the sensor suite; concurrently, it will be driven by higher level requirements on the system, such as crew timeline constraints and vehicle consumables. This paper will address how many seemingly competing requirements will have to be reconciled to create a complete system and system design. The objective is to determine a sensor suite and trajectories that enable Orion to successfully rendezvous and dock with a target vehicle in translunar space. Finally, the paper will report on the status of a NASA action to look for synergy within RPOD, across the crewed and robotic asteroid missions.
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of traversability of terrain within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes that generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
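The IF (antecedent), THEN (consequent) structure described above can be illustrated with a two-rule fragment using triangular membership functions and weighted-average defuzzification. The rule base, membership parameters, and consequent speeds below are invented for illustration and are not the paper's actual traversability rules:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend_speed(roughness):
    """Two illustrative fuzzy rules (not the paper's actual rule base):
        IF terrain is SMOOTH THEN speed is FAST
        IF terrain is ROUGH  THEN speed is SLOW
    Defuzzified by the weighted average of the consequent speeds."""
    mu_smooth = tri(roughness, -0.5, 0.0, 1.0)   # linguistic value SMOOTH
    mu_rough = tri(roughness, 0.0, 1.0, 1.5)     # linguistic value ROUGH
    fast, slow = 1.0, 0.2                        # consequent speeds (illustrative)
    w = mu_smooth + mu_rough
    return (mu_smooth * fast + mu_rough * slow) / w if w else 0.0
```

Intermediate roughness values activate both rules partially, so the recommended speed varies smoothly between the two consequents instead of switching abruptly, which is the practical appeal of the fuzzy formulation.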
Socially assistive robotics for stroke and mild TBI rehabilitation.
Matarić, Maja; Tapus, Adriana; Winstein, Carolee; Eriksson, Jon
2009-01-01
This paper describes an interdisciplinary research project aimed at developing and evaluating effective and user-friendly non-contact robot-assisted therapy, aimed at in-home use. The approach stems from the emerging field of social cognitive neuroscience that seeks to understand phenomena in terms of interactions between the social, cognitive, and neural levels of analysis. This technology-assisted therapy is designed to be safe and affordable, and relies on novel human-robot interaction methods for accelerated recovery of upper-extremity function after lesion-induced hemiparesis. The work is based on the combined expertise in the science and technology of non-contact socially assistive robotics and the clinical science of neurorehabilitation and motor learning, brought together to study how to best enhance recovery after stroke and mild traumatic brain injury. Our approach is original and promising in that it combines several ingredients that individually have been shown to be important for learning and long-term efficacy in motor neurorehabilitation: (1) intensity of task specific training and (2) engagement and self-management of goal-directed actions. These principles motivate and guide the strategies used to develop novel user activity sensing and provide the rationale for development of socially assistive robotics therapy for monitoring and coaching users toward personalized and optimal rehabilitation programs.
NASA Technical Reports Server (NTRS)
Lupisella, Mark L.; Mueller, Thomas
2016-01-01
This paper will provide a summary and analysis of the SpaceOps 2015 Workshop all-day session on "Advanced Technologies for Robotic Exploration, Leading to Human Exploration", held at Fucino Space Center, Italy on June 12th, 2015. The session was primarily intended to explore how robotic missions and robotics technologies more generally can help lead to human exploration missions. The session included a wide range of presentations that were roughly grouped into (1) broader background, conceptual, and high-level operations concepts presentations such as the International Space Exploration Coordination Group Roadmap, followed by (2) more detailed narrower presentations such as rover autonomy and communications. The broader presentations helped to provide context and specific technical hooks, and helped lay a foundation for the narrower presentations on more specific challenges and technologies, as well as for the discussion that followed. The discussion that followed the presentations touched on key questions, themes, actions and potential international collaboration opportunities. Some of the themes that were touched on were (1) multi-agent systems, (2) decentralized command and control, (3) autonomy, (4) low-latency teleoperations, (5) science operations, (6) communications, (7) technology pull vs. technology push, and (8) the roles and challenges of operations in early human architecture and mission concept formulation. A number of potential action items resulted from the workshop session, including: (1) using CCSDS as a further collaboration mechanism for human mission operations, (2) making further contact with subject matter experts, (3) initiating informal collaborative efforts to allow for rapid and efficient implementation, and (4) exploring how SpaceOps can support collaboration and information exchange with human exploration efforts. 
This paper will summarize the session and provide an overview of the above subjects as they emerged from the SpaceOps 2015 Workshop session.
NASA Astrophysics Data System (ADS)
Long, K.
2017-12-01
The ability to send a space probe beyond the Voyager probes, through the interstellar medium and towards the distant stars, has long been an ambition of both the science fiction literature and a small community of advocates who have argued for a broader and deeper vision of space exploration that goes outside of our Solar System. In this paper we discuss some of the historical interstellar probe concepts which are propelled using different types of propulsion technology, from energetic reaction engines to directed energy beaming, and consider the payload mass associated with such concepts. We compare and contrast the different design concepts, payload mass fractions, powers, and energies, and discuss the implications for robotic space exploration within the stellar neighbourhood. Finally, we consider the Breakthrough Starshot initiative, which proposes to send a gram-scale laser-driven spacecraft to the Alpha Centauri system in a 20-year mission travelling at v ≈ 0.2c. We show how this is a good start in pushing our robotic probes towards interstellar destinations, but also discuss the potential for scaling up this systems architecture to missions closer to home, or higher-mass missions wider afield. This is a presentation for the American Geophysical Union at the AGU Fall meeting, New Orleans, 11-15 December 2017, Special Session on the Interstellar Probe Missions. Keywords: Interstellar Probe, Breakthrough Starshot
Worrall, Douglas M; Brant, Jason A; Chai, Raymond L; Weinstein, Gregory S
2015-01-01
Cribriform adenocarcinoma of the tongue and minor salivary gland (CATMSG) is a rare, locally invasive, and poorly recognized tumor, typically occurring on the base of the tongue. This case report describes the previously unreported use of transoral robotic surgery (TORS) for the local resection of CATMSG in a novel location, the palatine tonsil, and leverages follow-up information to compare TORS to conventional surgical approaches. We performed transoral radical tonsillectomy, limited pharyngectomy, and base-of-tongue resection with staged left selective neck dissection. Tumor pathology revealed an infiltrating salivary gland carcinoma with perineural invasion and a histologically similar adenocarcinoma in 1 of 64 left neck lymph nodes. TORS was performed with no perioperative complications, and the patient was subsequently discharged on postoperative day 3 with a Dobhoff tube. Postoperatively, the Dobhoff tube was removed at 1 month, the patient was advanced to soft foods by mouth at 2 months, and a 3-month positron emission tomography-computed tomography scan showed no evidence of distant metastases and evolving postsurgical changes in the left tonsillectomy bed. This case report highlights the use of TORS resection with minimal acute and long-term morbidity compared to conventional approaches for the resection of this rare, locally invasive salivary gland carcinoma in the palatine tonsil. © 2015 S. Karger AG, Basel.
Intra-hospital use of a telepathology system.
Ongürü, O; Celasun, B
2000-01-01
Utilization of telepathology systems to cover distant geographical areas has increased recently. However, the potential usefulness of similar systems over shorter distances does not seem to be widely appreciated. In this study, we present data on the use of a simple telepathology system connecting the pathology department and the intra-operative consultation room within the operating theaters of the hospital. Ninety-eight frozen section cases from a past period were re-evaluated using a real-time setup. Forty-eight of the cases were re-evaluated in the customary fashion, allowing both ends to communicate and cooperate freely. Fifty of the cases, however, were evaluated by the consultant while the operating room end behaved like a robot, moving the stage of the microscope and changing and focusing the objectives. The deferral rate was lower than in the original frozen section evaluations. Overall, the sensitivity was 100%, specificity 98%, negative predictive value 96.5%, and positive predictive value 100%. No significant difference was found in diagnostic performance between the cooperative and robotic simulation methods. Our results strengthen the belief that telepathology is a valuable tool in offering pathology services to remote areas. The far side of a hospital building can also be a remote area, and a low-cost system can be helpful for intraoperative consultations. The educational value of such a system is also commendable.
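The reported figures follow from the standard confusion-matrix definitions of sensitivity, specificity, and the predictive values. A quick sketch of those formulas (the case counts in the test are hypothetical, not the study's actual breakdown):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix summary statistics of the kind reported
    in the study, from counts of true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of disease cases detected
        "specificity": tn / (tn + fp),  # fraction of benign cases cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

A sensitivity and PPV of 100%, as reported, imply zero false negatives and zero false positives respectively among the evaluated calls; the 98% specificity and 96.5% NPV reflect a small number of errors in the negative direction.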
An ecological evaluation of the metabolic benefits due to robot-assisted gait training.
Peri, E; Biffi, E; Maghini, C; Marzorati, M; Diella, E; Pedrocchi, A; Turconi, A C; Reni, G
2015-08-01
Cerebral palsy (CP), one of the most common neurological disorders in childhood, affects individuals' motor skills and muscle actions. This results in elevated heart rate and rate of oxygen uptake during sub-maximal exercise, indicating a mean energy expenditure higher than that of healthy subjects. Rehabilitation, which currently also involves robot-based devices, may have an impact on these aspects as well. In this study, an ecological setting has been proposed to evaluate the energy expenditure of 4 children with CP before and after a robot-assisted gait training. Even if the small sample size makes it difficult to give general indications, the results presented here are promising. Indeed, children showed an increasing trend of energy expenditure per minute and a decreasing trend of energy expenditure per step, in accordance with the control group. These data suggest a metabolic benefit of the treatment that may increase the locomotion efficiency of disabled children.
Exploration of Planetary Terrains with a Legged Robot as a Scout Adjunct to a Rover
NASA Technical Reports Server (NTRS)
Colombano, Silvano; Kirchner, Frank; Spenneberg, Dirk; Hanratty, James
2004-01-01
The Scorpion robot is an innovative, biologically inspired 8-legged walking robot. It currently runs a novel control approach that utilizes a central pattern generator (CPG) and local reflex action for each leg. From this starting point we propose to extend both the system's individual capabilities and its capacity to function as a "scout" cooperating with a larger wheeled rover. For this purpose we propose to develop a distributed system architecture that extends the system's capabilities both in the direction of high-level planning and execution in collaboration with a rover, and in the direction of force-feedback-based low-level behaviors that will greatly enhance its ability to walk and climb in rough, varied terrain. The final test of this improved ability will be a rappelling experiment in which the Scorpion explores a steep cliff side in cooperation with a rover that serves as both anchor and planner/executive.
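A CPG of the kind described can be sketched as a set of phase oscillators, one per leg, whose outputs a local reflex layer would then modulate. The gait pattern, frequency and amplitude below are illustrative assumptions, not the Scorpion's actual controller:

```python
import math

# Minimal sketch of a central pattern generator (CPG) for an 8-legged
# walker: one phase oscillator per leg, with fixed phase offsets so that
# alternating legs move in anti-phase (a simple two-group gait).
# All parameters are illustrative, not the Scorpion's.

def cpg_step(phases, dt=0.01, freq=1.0):
    """Advance each leg's oscillator phase by one time step."""
    return [(p + 2 * math.pi * freq * dt) % (2 * math.pi) for p in phases]

def leg_command(phase, amplitude=0.3):
    """Map a phase to a joint set-point; a reflex layer would modulate this."""
    return amplitude * math.sin(phase)

# Eight legs, alternating groups offset by half a cycle:
phases = [(i % 2) * math.pi for i in range(8)]
for _ in range(100):
    phases = cpg_step(phases)
commands = [leg_command(p) for p in phases]
```

A reflex layer, as in the described architecture, would adjust these open-loop commands leg by leg based on contact and load sensing.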
Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.
Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela
2015-12-01
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
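The core of a DNF model is an Amari-type field equation in which localized input plus lateral interaction (local excitation, broader inhibition) lets a self-sustained activation peak represent a memorized item. A minimal one-dimensional sketch, with illustrative parameters rather than those of the ARoS model:

```python
import math

# Minimal Amari-style dynamic neural field (DNF) sketch:
# du/dt = -u + h + stimulus + sum_j w(i - j) * f(u_j).
# Kernel, resting level and gain are illustrative, not the ARoS model's.

N, h, dt = 41, -1.0, 0.1

def kernel(d, a_exc=1.0, s_exc=3.0, g_inh=0.5):
    """Local Gaussian excitation minus constant (global) inhibition."""
    return a_exc * math.exp(-d * d / (2 * s_exc ** 2)) - g_inh

def f(u):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + math.exp(-4.0 * u))

def field_step(u, stim):
    new = []
    for i in range(N):
        lateral = sum(kernel(i - j) * f(u[j]) for j in range(N))
        new.append(u[i] + dt * (-u[i] + h + stim[i] + lateral))
    return new

u = [h] * N
stim = [3.0 if abs(i - 20) <= 2 else 0.0 for i in range(N)]  # localized input
for _ in range(50):
    u = field_step(u, stim)
# An activation peak forms at the stimulated site while the rest of the
# field stays below resting threshold.
```

In the model described above, such peaks would encode individual subtasks, with slower weight-based learning linking them into sequences.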
Anticipation as a Strategy: A Design Paradigm for Robotics
NASA Astrophysics Data System (ADS)
Williams, Mary-Anne; Gärdenfors, Peter; Johnston, Benjamin; Wightwick, Glenn
Anticipation plays a crucial role during any action, particularly for agents operating in open, complex and dynamic environments. In this paper we consider the role of anticipation as a strategy from a design perspective. Anticipation is a crucial skill in sporting games like soccer, tennis and cricket. We explore the role of anticipation in robot soccer matches in the context of the RoboCup vision of developing a robot soccer team capable of defeating the FIFA World Champions by 2050. Anticipation in soccer can be planned or emergent, but in either case it can be designed. Two key obstacles stand in the way of developing more anticipatory robot systems: an impoverished understanding of the "anticipation" process/capability, and a lack of know-how in the design of anticipatory systems. Several teams at RoboCup have developed remarkable preemptive behaviors; the CMU Dive and UTS Dodge are two compelling examples. In this paper we take steps towards designing robots that can adopt anticipatory behaviors by proposing an innovative model of anticipation as a strategy that specifies the key characteristics of the anticipation behaviors to be developed. The model can drive the design of autonomous systems by providing a means to explore and to represent anticipation requirements. Our approach is to analyze anticipation as a strategy and then to use the insights obtained to design a reference model that can be used to specify a set of anticipatory requirements for guiding an autonomous robot soccer system.
Integrating planning perception and action for informed object search.
Manso, Luis J; Gutierrez, Marco A; Bustos, Pablo; Bachiller, Pilar
2018-05-01
This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because doing so is a step towards fulfilling the robot's mission. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of objects found in each container. The paper provides quantitative results comparing the efficiency of the proposed method against two baseline approaches.
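The deliberative guessing step can be sketched as ranking containers by semantic distance between the target object and their observed contents. The similarity table below is a hand-made stand-in for whatever learned semantic model the robot would actually use; all names and values are illustrative:

```python
# Sketch of the deliberative step: rank object containers by semantic
# distance between the target object and each container's observed
# contents. The similarity table is a hypothetical stand-in for a learned
# semantic model (e.g. word embeddings).

SIMILARITY = {  # semantic similarity in [0, 1], hypothetical values
    ("mug", "plate"): 0.7, ("mug", "book"): 0.1,
    ("mug", "cup"): 0.9, ("mug", "stapler"): 0.2,
}

def similarity(a, b):
    return SIMILARITY.get((a, b), SIMILARITY.get((b, a), 0.0))

def semantic_distance(target, contents):
    """Distance = 1 - best similarity between target and any seen object."""
    if not contents:
        return 1.0  # nothing observed: least informative, inspect last
    return 1.0 - max(similarity(target, obj) for obj in contents)

def rank_containers(target, containers):
    """Order containers so the most promising one is inspected first."""
    return sorted(containers, key=lambda c: semantic_distance(target, containers[c]))

containers = {
    "kitchen_table": ["plate", "cup"],
    "office_shelf": ["book", "stapler"],
}
order = rank_containers("mug", containers)
```

Upon a failed guess, the robot would simply move to the next container in the ranked order, as the abstract describes.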
Light Robots: Bridging the Gap between Microrobotics and Photomechanics in Soft Materials.
Zeng, Hao; Wasylczyk, Piotr; Wiersma, Diederik S; Priimagi, Arri
2018-06-01
For decades, roboticists have focused their efforts on rigid systems that enable programmable, automated action and sophisticated control with maximal movement precision and speed. Meanwhile, materials scientists have sought compounds and fabrication strategies to devise polymeric actuators that are small, soft, adaptive, and stimuli-responsive. Merging these two fields has given birth to a new class of devices: soft microrobots that, by combining concepts from microrobotics and stimuli-responsive materials research, provide several advantages in a miniature form (an external, remotely controllable power supply, adaptive motion, and human-friendly interaction), with device design and action often inspired by biological systems. Herein, recent progress in soft microrobotics is highlighted based on light-responsive liquid-crystal elastomers and polymer networks, focusing on photomobile devices such as walkers, swimmers, and mechanical oscillators, which may ultimately lead to flying microrobots. Finally, self-regulated actuation is proposed as a new pathway toward fully autonomous, intelligent light robots of the future. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multisensory architectures for action-oriented perception
NASA Astrophysics Data System (ADS)
Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.
2007-05-01
In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, for the visual system we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore, Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel, allowing the sensors to be driven simultaneously. With this approach the proposed multi-sensory architecture can achieve real-time performance.
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. With Steady State Visual Evoked Potential (SSVEP) techniques, the ErrP that appear when a classification error occurs are not easily recognizable by examining only the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra brain activity due to misclassifications observed by a user exploiting a non-invasive BMI for robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-band energies. The experimental results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy of up to 97% for 50 EEG segments using a 2-class SVM classifier.
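Among the proposed feature families, sub-band energies are the simplest to illustrate: the energy of an EEG segment falling in each frequency band of interest. The sketch below uses a plain DFT and illustrative band edges, not the paper's exact feature set:

```python
import math

# Sketch of one family of the proposed t-f features: sub-band energies of
# an EEG segment, computed with a plain DFT. Band edges (Hz) are
# illustrative, not the paper's exact choices.

def subband_energies(x, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Energy of each frequency band, from the magnitude spectrum."""
    n = len(x)
    energies = []
    for lo, hi in bands:
        e = 0.0
        for k in range(n // 2):
            f_k = k * fs / n          # frequency of DFT bin k
            if lo <= f_k < hi:        # only bins inside the band
                re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                e += re * re + im * im
        energies.append(e)
    return energies

# Synthetic 10 Hz "alpha" oscillation sampled at 128 Hz for 1 second:
fs, n = 128, 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]
feat = subband_energies(signal, fs)  # energy concentrates in the 8-13 Hz band
```

Such feature vectors, alongside IF and complexity measures, would then feed the 2-class SVM described in the abstract.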
Effects of Radiation on Metastasis and Tumor Cell Migration
Vilalta, Marta; Rafat, Marjan; Graves, Edward E.
2016-01-01
It is well known that tumor cells migrate from the primary lesion to distant sites to form metastases and that these lesions limit patient outcome in a majority of cases. However, the extent to which radiation influences this process, and to which migration in turn alters radiation response, remains controversial. There are preclinical and clinical reports showing that focal radiotherapy can both increase the development of distant metastases and induce the regression of established metastases through the abscopal effect. More recently, preclinical studies have suggested that radiation can attract migrating tumor cells and may thereby facilitate tumor recurrence. In this review, we summarize these phenomena and their potential mechanisms of action, and evaluate their significance for modern radiation therapy strategies. PMID:27022944
Vosoughi, Aram; Smith, Paul Taylor; Zeitouni, Joseph A; Sodeman, Gregori M; Jorda, Merce; Gomez-Fernandez, Carmen; Garcia-Buitrago, Monica; Petito, Carol K; Chapman, Jennifer R; Campuzano-Zuluaga, German; Rosenberg, Andrew E; Kryvenko, Oleksandr N
2018-04-30
Frozen section telepathology interpretation experience has been largely limited to practices with locations significantly distant from one another and a sporadic need for frozen section diagnosis. In 2010 we established a real-time non-robotic telepathology system in a very active cancer center for daily frozen section service. Herein, we evaluate its accuracy compared to direct microscopic interpretation performed in the main hospital by the same faculty, and its cost-efficiency over a 1-year period. Of 643 cases (1416 parts) requiring intraoperative consultation, 333 cases (690 parts) were examined by telepathology and 310 cases (726 parts) by direct microscopy. The corresponding discrepancy rates were 2.6% (18 cases: 6 (0.9%) sampling and 12 (1.7%) diagnostic errors) and 3.2% (23 cases: 8 (1.1%) sampling and 15 (2.1%) diagnostic errors), P=.63. The sensitivity and specificity of intraoperative frozen diagnosis were 0.92 and 0.99, respectively, in telepathology, and 0.90 and 0.99, respectively, in direct microscopy. There was no correlation of error incidence with the postgraduate year level of residents involved in the telepathology service. Cost analysis indicated that the value of the time saved by telepathology was $19,691 over the one-year study period, while the capital cost of establishing the system was $8,924. Thus, real-time non-robotic telepathology is a reliable and easy-to-use tool for frozen section evaluation in busy clinical settings, especially when the frozen section service involves more than one hospital, and it is cost-efficient when travel is a component of the service. Copyright © 2018. Published by Elsevier Inc.
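The cost figures reported imply a short payback period, which can be checked with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope payback for the reported figures: $8,924 capital
# cost against $19,691 of pathologist time saved per year.

capital_cost = 8924     # one-time system cost, USD
annual_savings = 19691  # value of time saved per year, USD

payback_months = 12 * capital_cost / annual_savings  # months to break even
first_year_net = annual_savings - capital_cost       # net benefit, year one
```

On these numbers the system pays for itself in roughly five and a half months.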
Neskey, David M; Osman, Abdullah A; Ow, Thomas J; Katsonis, Panagiotis; McDonald, Thomas; Hicks, Stephanie C; Hsu, Teng-Kuei; Pickering, Curtis R; Ward, Alexandra; Patel, Ameeta; Yordy, John S; Skinner, Heath D; Giri, Uma; Sano, Daisuke; Story, Michael D; Beadle, Beth M; El-Naggar, Adel K; Kies, Merrill S; William, William N; Caulin, Carlos; Frederick, Mitchell; Kimmel, Marek; Myers, Jeffrey N; Lichtarge, Olivier
2015-04-01
TP53 is the most frequently altered gene in head and neck squamous cell carcinoma, with mutations occurring in over two-thirds of cases, but the prognostic significance of these mutations remains elusive. In the current study, we evaluated a novel computational approach termed evolutionary action (EAp53) to stratify patients with tumors harboring TP53 mutations as high or low risk, and validated this system in both in vivo and in vitro models. Patients with high-risk TP53 mutations had the poorest survival outcomes and the shortest time to the development of distant metastases. Tumor cells expressing high-risk TP53 mutations were more invasive and tumorigenic and they exhibited a higher incidence of lung metastases. We also documented an association between the presence of high-risk mutations and decreased expression of TP53 target genes, highlighting key cellular pathways that are likely to be dysregulated by this subset of p53 mutations that confer particularly aggressive tumor behavior. Overall, our work validated EAp53 as a novel computational tool that may be useful in clinical prognosis of tumors harboring p53 mutations. ©2015 American Association for Cancer Research.
Towards a model of temporal attention for on-line learning in a mobile robot
NASA Astrophysics Data System (ADS)
Marom, Yuval; Hayes, Gillian
2001-06-01
We present a simple attention system, capable of bottom-up signal detection adaptive to subjective internal needs. The system is used by a robotic agent, learning to perform phototaxis and obstacle avoidance by following a teacher agent around a simulated environment, and deciding when to form associations between perceived information and imitated actions. We refer to this kind of decision-making as on-line temporal attention. The main role of the attention system is perception of change; the system is regulated through feedback about cognitive effort. We show how different levels of effort affect both the ability to learn a task, and to execute it.
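The described attention mechanism, bottom-up detection of perceptual change regulated by feedback about cognitive effort, can be sketched as a simple thresholded change detector. All constants and signals below are illustrative assumptions, not the paper's model:

```python
# Sketch of bottom-up "perception of change" gated by cognitive effort:
# the more effort already being spent, the less readily a new stimulus
# captures attention. Constants are illustrative.

class TemporalAttention:
    def __init__(self, base_threshold=0.1, effort_gain=0.5):
        self.prev = None
        self.base = base_threshold
        self.gain = effort_gain

    def attend(self, percept, effort):
        """True when perceived change exceeds the effort-raised threshold."""
        if self.prev is None:
            self.prev = percept
            return True                  # first observation is always attended
        change = abs(percept - self.prev)
        self.prev = percept
        return change > self.base + self.gain * effort

att = TemporalAttention()
att.attend(0.0, effort=0.0)          # first observation
low = att.attend(0.05, effort=0.0)   # small change: ignored
high = att.attend(0.6, effort=0.0)   # large change: attended
busy = att.attend(0.6, effort=2.0)   # no change while effort is high: ignored
```

In the learning scenario above, attending would trigger the formation of an association between perceived information and the imitated action.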
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
The Interstellar Ethics of Self-Replicating Probes
NASA Astrophysics Data System (ADS)
Cooper, K.
Robotic spacecraft have been our primary means of exploring the Universe for over 50 years. Should interstellar travel become reality it seems unlikely that humankind will stop using robotic probes. These probes will be able to replicate themselves ad infinitum by extracting raw materials from the space resources around them and reconfiguring them into replicas of themselves, using technology such as 3D printing. This will create a colonising wave of probes across the Galaxy. However, such probes could have negative as well as positive consequences and it is incumbent upon us to factor self-replicating probes into our interstellar philosophies and to take responsibility for their actions.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Rong, Wei; Tong, Kai Yu; Hu, Xiao Ling; Ho, Sze Kit
2015-03-01
An electromyography-driven robot system integrated with neuromuscular electrical stimulation (NMES) was developed to investigate its effectiveness in post-stroke rehabilitation. The performance of this system in assisting finger flexion/extension with different assistance combinations was evaluated in five stroke subjects. Then, a pilot study with 20-session training was conducted to evaluate the training's effectiveness. The results showed that combined assistance from the NMES-robot could improve finger movement accuracy, encourage muscle activation of the finger muscles and suppress excessive muscular activity in the elbow joint. When the assistance from both the NMES and the robot was 50% of its maximum, finger-tracking performance was best, with the lowest root mean square error, a greater range of motion, higher voluntary muscle activation of the finger joints and lower muscle co-contraction in the finger and elbow joints. Upper limb function improved after the 20-session training, as indicated by increased clinical scores on the Fugl-Meyer Assessment, Action Research Arm Test and Wolf Motor Function Test. Muscle co-contraction was reduced in the finger and elbow joints, as reflected by the Modified Ashworth Scale. The findings demonstrated that an electromyography-driven NMES-robot used in chronic stroke improved hand function and tracking performance. Further research is warranted to validate the method on a larger scale. Implications for Rehabilitation: The hand robotics and neuromuscular electrical stimulation (NMES) techniques are still separate systems in current post-stroke hand rehabilitation. This is the first study to investigate the combined effects of NMES and a robot on hand rehabilitation. Finger-tracking performance was improved with the combined assistance from the EMG-driven NMES-robot hand system. The assistance from the robot could improve finger movement accuracy, and the assistance from the NMES could reduce muscle co-contraction in the finger and elbow joints. Upper limb function improved in chronic stroke patients after the pilot study of 20-session hand training with combined assistance from the EMG-driven NMES-robot. Muscle spasticity in the finger and elbow joints was reduced after the training.
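The root mean square error used to compare assistance combinations is computed over the finger-tracking trajectory; a minimal sketch with made-up trajectories:

```python
import math

# Root-mean-square tracking error, the measure used to compare assistance
# combinations in the finger-tracking task. Trajectories are hypothetical.

def rmse(target, actual):
    """RMSE between a target trajectory and the subject's actual one."""
    return math.sqrt(sum((t - a) ** 2 for t, a in zip(target, actual)) / len(target))

target = [0.0, 0.5, 1.0, 0.5, 0.0]        # desired finger angle profile
assisted = [0.0, 0.45, 0.95, 0.5, 0.05]   # hypothetical assisted trial
unassisted = [0.1, 0.3, 0.7, 0.3, 0.1]    # hypothetical unassisted trial
```

A lower RMSE for the assisted trial, as in this toy example, is the kind of improvement the study reports at the 50%/50% assistance combination.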
Kassahun, Yohannes; Yu, Bingbin; Tibebu, Abraham Temesgen; Stoyanov, Danail; Giannarou, Stamatia; Metzen, Jan Hendrik; Vander Poorten, Emmanuel
2016-04-01
Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review is focused on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of the keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modeling surgical skill and competence, or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is believed likely to also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team on a cognitive level as well, for example by lowering the mental load of the team. ML could also help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics: current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.
ARTIE: An Integrated Environment for the Development of Affective Robot Tutors
Imbernón Cuadrado, Luis-Eduardo; Manjarrés Riesco, Ángeles; De La Paz López, Félix
2016-01-01
Over the last decade robotics has attracted a great deal of interest from teachers and researchers as a valuable educational tool from preschool to high-school levels. The implementation of social-support behaviors in robot tutors, in particular in the emotional dimension, can make a significant contribution to learning efficiency. With the aim of contributing to the rising field of affective robot tutors, we have developed ARTIE (Affective Robot Tutor Integrated Environment). We offer an architectural pattern which integrates any given educational software for primary school children with a component whose function is to identify the emotional state of the students who are interacting with the software, and with the driver of a robot tutor which provides personalized emotional pedagogical support to the students. In order to support the development of affective robot tutors according to the proposed architecture, we also provide a methodology which incorporates a technique for eliciting pedagogical knowledge from teachers, and a generic development platform. This platform contains a component for identifying emotional states by analysing keyboard and mouse interaction data, and a generic affective pedagogical support component which specifies the affective educational interventions (including facial expressions, body language, tone of voice,…) in terms of BML (a Behavior Model Language for virtual agent specification) files which are translated into actions of a robot tutor. The platform and the methodology are both adapted to primary school students. Finally, we illustrate the use of this platform to build a prototype implementation of the architecture, in which the educational software is instantiated with Scratch and the robot tutor with NAO. We also report on a user experiment we carried out to orient the development of the platform and of the prototype.
We conclude from our work that, in the case of primary school students, it is possible to identify, without using intrusive and expensive identification methods, the emotions which most affect the character of educational interventions. Our work also demonstrates the feasibility of a general-purpose architecture of decoupled components, in which a wide range of educational software and robot tutors can be integrated and then used according to different educational criteria. PMID:27536230
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition across two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques for control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution, but also provides the self with information on the feasibility, consequences, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
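The PMP idea of goal-induced force fields animating the body schema can be sketched for a planar two-link arm: the goal exerts a virtual elastic force on the end-effector, which the Jacobian transpose maps to joint space, relaxing the configuration toward the target without any explicit kinematic inversion. Link lengths, stiffness and step size are illustrative assumptions:

```python
import math

# Minimal passive motion paradigm (PMP) sketch for a planar 2-link arm.
# The goal induces an attractor force field on the end-effector; the
# Jacobian transpose maps it to joint space, so the body schema relaxes
# toward the target with no explicit inverse kinematics.

L1, L2 = 1.0, 1.0  # illustrative link lengths

def forward(q):
    """End-effector position of the 2-link arm."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def jacobian_T_times(q, fx, fy):
    """J(q)^T applied to a task-space force (fx, fy)."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    t1 = (-L1 * s1 - L2 * s12) * fx + (L1 * c1 + L2 * c12) * fy
    t2 = (-L2 * s12) * fx + (L2 * c12) * fy
    return t1, t2

def pmp_relax(q, goal, k=1.0, dt=0.05, steps=500):
    for _ in range(steps):
        x, y = forward(q)
        fx, fy = k * (goal[0] - x), k * (goal[1] - y)  # attractor force field
        t1, t2 = jacobian_T_times(q, fx, fy)
        q = [q[0] + dt * t1, q[1] + dt * t2]
    return q

q_final = pmp_relax([0.3, 0.6], goal=(0.5, 1.2))
err = math.hypot(forward(q_final)[0] - 0.5, forward(q_final)[1] - 1.2)
```

Redundancy handling falls out of the same scheme: with more joints than task dimensions, the relaxation simply distributes the motion across the extra DoFs instead of requiring an explicit inverse.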
Self-organization via active exploration in robotic applications
NASA Technical Reports Server (NTRS)
Ogmen, H.; Prakash, R. V.
1992-01-01
We describe a neural-network-based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We indicate that a self-organization capability is necessary for any system to operate successfully in a non-stationary environment, and we suggest that self-organization should be based on an active exploration process. We investigated neural architectures having novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulations. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the proposed approaches within the new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural-network-based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991). This robot has the capability of position-, size-, and rotation-invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties, owing to the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, to understand on the one hand how the emergence of such invariances is possible, and on the other the dynamics that lead to these invariances. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by the inverting-glass experiments of Helmholtz.
We will introduce Pavlovian conditioning circuits in our future work with the precise objective of achieving the generation, coordination, and internalization of sequences of actions.
Altered Connectivity and Action Model Formation in Autism Is Autism
Mostofsky, Stewart H.; Ewen, Joshua B.
2014-01-01
Internal action models refer to sensory-motor programs that form the brain basis for a wide range of skilled behavior and for understanding others’ actions. Development of these action models, particularly those reliant on visual cues from the external world, depends on connectivity between distant brain regions. Studies of children with autism reveal anomalous patterns of motor learning and impaired execution of skilled motor gestures. These findings robustly correlate with measures of social and communicative function, suggesting that anomalous action model formation may contribute to impaired development of social and communicative (as well as motor) capacity in autism. Examination of the pattern of behavioral findings, as well as convergent data from neuroimaging techniques, further suggests that autism-associated action model formation may be related to abnormalities in neural connectivity, particularly decreased function of long-range connections. This line of study can lead to important advances in understanding the neural basis of autism and, more critically, can be used to guide effective therapies targeted at improving social, communicative, and motor function. PMID:21467306
Vartolomei, Mihai Dorin; Matei, Deliu Victor; Renne, Giuseppe; Tringali, Valeria Maria; Crisan, Nicolae; Musi, Gennaro; Mistretta, Francesco Alessandro; Russo, Andrea; Cozzi, Gabriele; Cordima, Giovani; Luzzago, Stefano; Cioffi, Antonio; Di Trapani, Ettore; Catellani, Michele; Delor, Maurizio; Bottero, Danilo; Imbimbo, Ciro; Mirone, Vincenzo; Ferro, Matteo; de Cobelli, Ottavio
2017-10-27
Nowadays, there is a debate about which surgical treatment is best for clinical T1 renal tumors. Considering oncological outcomes, many open and laparoscopic series have been published; as far as robotic series are concerned, only a few report 5-yr oncological outcomes. The aim of this study was to analyze robot-assisted partial nephrectomy (RAPN) midterm oncological outcomes achieved in a tertiary robotic reference center. Between April 2009 and September 2013, 123 consecutive patients with clinical T1-stage renal masses underwent RAPN in our tertiary cancer center. Inclusion criteria were as follows: pathologically confirmed renal cell carcinomas (RCCs) and follow-up of >12 mo. Eighteen patients were excluded due to follow-up of <12 mo and 15 due to benign final pathology. Median follow-up was 59 mo (interquartile range 44-73 mo). Patients were followed according to guideline recommendations and institutional protocol. Outcomes were measured by time to disease progression, overall survival, and time to cancer-specific death. The Kaplan-Meier method was used to estimate survival; log-rank tests were applied for pairwise comparison of survival. Of the 90 patients included, 66 (73.3%) had T1a, 12 (13.3%) T1b, three (3.3%) T2a, and nine (10%) T3a tumors. The predominant histological type was clear cell carcinoma: 67 (74.5%). Fuhrman grade 1 or 2 was found in 73.3% of all malignant tumors. Two patients (2.2%) had positive surgical margins, and the complication rate was 17.8%. The relapse rate was 7.7%, including two cases (2.2%) of local recurrence and five (5.5%) of distant metastasis. Five-year disease-free survival was 90.9%, 5-yr cancer-specific survival was 97.5%, and 5-yr overall survival was 95.1%.
Midterm oncological outcomes after RAPN for localized RCCs (predominantly T1a tumors of low anatomic complexity) were shown to be good, adding significant evidence to support the oncological efficacy and safety of RAPN for the treatment of this type of tumor. Robot-assisted partial nephrectomy seems to be the most promising minimally invasive approach for the treatment of renal masses suitable for organ-sparing surgery, as midterm (5-yr) oncological outcomes are excellent. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.
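The survival figures above come from the Kaplan-Meier method; as a reminder of how the estimator works, here is a minimal stdlib-Python sketch on hypothetical follow-up data (the times and events below are illustrative, not the study's patient data).

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up times; `events`
    is 1 for an observed event (e.g., recurrence or death), 0 for censoring.
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            n += 1
            d += data[i][1]
            i += 1
        if d:  # survival drops only at event times: S *= (1 - d/n_at_risk)
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= n  # events and censored subjects both leave the risk set
    return curve

# hypothetical follow-up times (months) and event indicators
times = [12, 20, 20, 30, 44, 59, 59, 73, 80, 90]
events = [0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
curve = kaplan_meier(times, events)
```

Censored subjects reduce the number at risk without dropping the survival curve, which is exactly what distinguishes this estimator from a naive event fraction.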
Felli, Emanuele; Brunetti, Francesco; Disabato, Mara; Salloum, Chady; Azoulay, Daniel; De'angelis, Nicola
2014-01-01
Right colon cancer rarely presents as an emergency, in which bowel occlusion and massive bleeding are the most common clinical presentations. Although there are no definite guidelines, the first-line treatment for massive right colon cancer bleeding should ideally stop the bleeding using endoscopy or interventional radiology, subsequently allowing proper tumor staging and planning of a definite treatment strategy. Minimally invasive approaches for right and left colectomy have progressively increased and are widely performed in elective settings, with laparoscopy chosen in the majority of cases. Conversely, in emergent and urgent surgeries, minimally invasive techniques are rarely performed. We report a case of an 86-year-old woman who was successfully treated for massive rectal bleeding in an urgent setting by robotic surgery (da Vinci Intuitive Surgical System®). At admission, the patient had severe anemia (Hb 6 g/dL) and hemodynamic stability. A contrast-enhanced computed tomography scan showed a right colon cancer with active bleeding; no distant metastases were found. A colonoscopy did not show any other bowel lesion, while constant bleeding from the right pre-stenotic colon mass was temporarily arrested by endoscopic argon coagulation. A robotic right colectomy in an urgent setting (within 24 hours of admission) was indicated. A three-armed robot was used, with docking on the right side of the patient and a fourth trocar for the assistant surgeon. Because of the patient's poor nutritional status, a double-barreled ileocolostomy was performed. The post-operative period was uneventful. As the neoplasia was a pT3N0 adenocarcinoma, surveillance was decided after a multidisciplinary meeting, and restoration of intestinal continuity was performed 3 months later, once good nutritional status was achieved. In addition, we reviewed the current literature on minimally invasive colectomy performed for colon carcinoma in emergent or urgent settings.
No study on the robotic approach was found. Seven studies evaluating the role of laparoscopic colectomy concluded that this technique is a safe and feasible option associated with lower blood loss and shorter hospital stay. It may require longer operative time, but morbidity and mortality rates appeared comparable to open colectomy. However, the importance of the surgeon's experience and the right selection of candidate patients cannot be overstated.
2015-08-28
for the scene, and effectively isolates the points on buildings. We are now able to accurately filter in buildings and filter out the ground. Figure 3: Our work distinguishes intentional action of an unknown agent (the kids in this example) from various other motions, such as the rolling ball, the crashing waves, and the background motion.
Information-driven self-organization: the dynamical system approach to autonomous robot behavior.
Ay, Nihat; Bernigau, Holger; Der, Ralf; Prokopenko, Mikhail
2012-09-01
In recent years, information theory has come into the focus of researchers interested in the sensorimotor dynamics of both robots and living beings. One root of these approaches is the idea that living beings are information processing systems and that the optimization of these processes should be an evolutionary advantage. Apart from these more fundamental questions, there has recently been much interest in the question of how a robot can be equipped with an internal drive for innovation or curiosity that may serve as a drive for an open-ended, self-determined development of the robot. The success of these approaches depends essentially on the choice of a convenient measure of information. This article studies in some detail the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process. The PI of a process quantifies the total information of past experience that can be used for predicting future events. However, the application of information-theoretic measures in robotics is mostly restricted to the case of a finite, discrete state-action space. This article aims at applying the PI in the dynamical systems approach to robot control. We study linear systems as a first step and derive exact results for the PI together with explicit learning rules for the parameters of the controller. Interestingly, these learning rules are of Hebbian nature and local in the sense that the synaptic update is given by the product of activities available directly at the pertinent synaptic ports. The general findings are exemplified by a number of case studies. In particular, in a two-dimensional system designed to mimic embodied systems with latent oscillatory locomotion patterns, it is shown that maximizing the PI means recognizing and amplifying the latent modes of the robotic system.
This and many other examples show that the learning rules derived from the maximum PI principle are a versatile tool for the self-organization of behavior in complex robotic systems.
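The predictive information the article builds on has a closed form for the simplest dynamical system, a stationary Gaussian AR(1) process, where the Markov property reduces the PI to the mutual information between successive states. The sketch below (illustrative, not the authors' code) compares that closed form with a Monte-Carlo estimate from a simulated trajectory.

```python
import math
import random

def pi_ar1(a):
    """Exact predictive information of a stationary Gaussian AR(1) process
    x[t+1] = a*x[t] + noise. Because the process is Markov, the PI equals
    the mutual information of successive states: -0.5*ln(1 - a^2)."""
    return -0.5 * math.log(1.0 - a * a)

def pi_ar1_empirical(a, n=200_000, seed=0):
    """Monte-Carlo check: estimate the lag-1 correlation rho of a simulated
    trajectory and apply the Gaussian MI formula -0.5*ln(1 - rho^2)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / (n - 1)
    rho = cov / var
    return -0.5 * math.log(1.0 - rho * rho)
```

The PI grows as |a| approaches 1, i.e., as the dynamics become more predictable while still producing varied states, which is the regime the maximum-PI principle drives a controller toward.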
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path planning, the human plans candidate paths and, if satisfied, commits them to be executed by the robot. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, however, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette: a distinct color represents each objective, and tradeoffs among objectives are balanced much as an artist mixes colors to obtain a desired shade. Human intent is thus analogous to the artist's shade of color. We call the GUI an "Adverb Palette," where "Adverb" denotes a specific type of objective for the path, such as the adverbs "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The novel interactive interface gives the user an opportunity to evaluate various alternatives (that trade off different objectives) by visualizing the instantaneous outcomes that result from her actions on the interface. In addition to assisting analysis of the solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby widening the variety of plans considered. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem.
Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
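At its core, the palette metaphor scalarizes several path objectives with user-chosen weights. A minimal sketch under assumed adverbs and cost functions (the hazard location, paths, and cost definitions are hypothetical; the real GUI re-plans interactively rather than choosing between two fixed paths):

```python
def path_cost(path, weights, costs):
    """Scalarize multiple path objectives with palette weights.
    `costs` maps adverb -> cost function; `weights` is the user's palette
    mix, normalized here so the hues sum to one."""
    total_w = sum(weights.values())
    return sum(w / total_w * costs[adverb](path) for adverb, w in weights.items())

# Hypothetical adverbs: "quickly" penalizes length, "safely" penalizes
# proximity to a danger zone at (5, 5).
def length(path):
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def danger(path, hazard=(5.0, 5.0)):
    return sum(1.0 / (1e-3 + ((x - hazard[0]) ** 2 + (y - hazard[1]) ** 2) ** 0.5)
               for x, y in path)

costs = {"quickly": length, "safely": danger}
direct = [(0, 0), (5, 5), (10, 10)]   # short, but passes through the hazard
detour = [(0, 0), (10, 0), (10, 10)]  # longer, but keeps clear of it

def best(paths, weights):
    """Pick the candidate path that minimizes the blended cost."""
    return min(paths, key=lambda p: path_cost(p, weights, costs))
```

Sliding the palette weights from "quickly" toward "safely" flips the preferred path from the direct route to the detour, which is the tradeoff the interface lets the user see instantly.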
Autonomous Lawnmower using FPGA implementation.
NASA Astrophysics Data System (ADS)
Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd
2016-11-01
Nowadays, various types of robots have been invented for multiple purposes. Robots have special characteristics that surpass human abilities and can operate in extreme environments that humans cannot endure. In this paper, an autonomous robot is built to imitate the actions of a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements, and all data and information are processed on it. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware using the Quartus II software. This robot has the ability to avoid obstacles using an ultrasonic sensor, and it uses two DC motors for its movement: forward, backward, and turning left and right. The path of the automatic lawn mower is based on a path-planning technique. Four Global Positioning System (GPS) plots are set to create a boundary, ensuring that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the FPGA DE board (Cyclone II) with the help of the sensors. Furthermore, the SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower was able to operate efficiently and smoothly, returning to its coordinated paths after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.
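The boundary behavior described above, operating only inside the area spanned by the four GPS plots, amounts to a point-in-polygon test; a standard ray-casting version is sketched below (a Python stand-in for the FPGA logic, with illustrative coordinates).

```python
def inside_boundary(point, boundary):
    """Ray-casting point-in-polygon test: returns True when `point` lies
    inside the quadrilateral defined by the four GPS plots, so the mower
    only operates within the user-set area."""
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # count edges crossed by a horizontal ray extending right from the point
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# hypothetical boundary: four GPS plots forming a 10 x 10 rectangle
plots = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
```

An odd number of edge crossings means the point is inside; the same test works for any simple (non-self-intersecting) quadrilateral, not only rectangles.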
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allevato, Adam
2016-07-21
ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files that define actions and conditions. A Python script parses the files and runs the actions sequentially using the sys and subprocess modules. Between actions, it uses various ROS-based code to check the conditions required to proceed, and it only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a Qt application designed to create the YAML files required by ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
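The action/condition loop described above can be illustrated in a few lines of Python. This is a stand-in sketch, not ROSSTEP's actual code: real steps come from YAML and launch ROS nodes and scripts with ROS-based condition checks, whereas plain callables and a shared state dict are used here.

```python
import time

def run_steps(steps, check_interval=0.01, timeout=1.0):
    """Minimal ROSSTEP-style sequencer sketch: each step has an `action`
    callable and a list of `conditions` (callables returning bool). The
    runner executes the action, then polls the conditions and only advances
    to the next step once all of them hold."""
    for step in steps:
        step["action"]()
        deadline = time.monotonic() + timeout
        while not all(cond() for cond in step["conditions"]):
            if time.monotonic() > deadline:
                raise TimeoutError(f"conditions for {step['name']} not met")
            time.sleep(check_interval)

# hypothetical two-step sequence sharing a state dict (stands in for ROS state)
state = {"driver_up": False, "arm_ready": False}
steps = [
    {"name": "start_driver",
     "action": lambda: state.update(driver_up=True),
     "conditions": [lambda: state["driver_up"]]},
    {"name": "home_arm",
     "action": lambda: state.update(arm_ready=True),
     "conditions": [lambda: state["arm_ready"]]},
]
```

Gating each step on explicit conditions, rather than fixed delays, is what lets such a sequencer tolerate actions whose completion time varies from run to run.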
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
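The grounding loop, hear a word and update or create the category it names, can be sketched as follows (a toy stand-in, not the paper's architecture; the words and feature vectors are hypothetical):

```python
def teach(word, features, lexicon):
    """One naming interaction: the instructor utters `word` for the object in
    view, and the robot grounds it by updating (or creating) the running-mean
    prototype for that word. New words open new categories, so the lexicon
    grows incrementally and open-endedly."""
    if word not in lexicon:
        lexicon[word] = (list(features), 1)
    else:
        proto, n = lexicon[word]
        lexicon[word] = ([(p * n + f) / (n + 1) for p, f in zip(proto, features)],
                         n + 1)

def name_object(features, lexicon):
    """Categorize a percept by the nearest grounded prototype."""
    return min(lexicon, key=lambda w: sum((p - f) ** 2
                                          for p, f in zip(lexicon[w][0], features)))
```

Corrective feedback in the paper's language game would map onto calling `teach` again with the corrected word, pulling the corresponding prototype toward the misnamed percept.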
Godfrey, Sasha Blue; Holley, Rahsaan J; Lum, Peter S
2013-11-01
The goals of this pilot study were to quantify the clinical benefits of using the Hand Exoskeleton Rehabilitation Robot for hand rehabilitation after stroke and to determine the population best served by this intervention. Nine subjects with chronic stroke (one excluded from analysis) completed 18 sessions of training with the Hand Exoskeleton Rehabilitation Robot and a preevaluation, a postevaluation, and a 90-day clinical evaluation. Overall, the subjects improved in both range of motion and clinical measures. Compared with the preevaluation, the subjects showed significant improvements in range of motion, grip strength, and the hand component of the Fugl-Meyer (mean changes, 6.60 degrees, 8.84 percentage points, and 1.86 points, respectively). A subgroup of six subjects exhibited lower tone and received a higher dosage of training. These subjects had significant gains in grip strength, the hand component of the Fugl-Meyer, and the Action Research Arm Test (mean changes, 8.42 percentage points, 2.17 points, and 2.33 points, respectively). Future work is needed to better manage higher levels of hypertonia and provide more support to subjects with higher impairment levels; however, the current results support further study into the Hand Exoskeleton Rehabilitation Robot treatment.
Context recognition and situation assessment in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Yavnai, Arie
1993-05-01
The capability to recognize the operating context and to assess the situation in real time is needed if a high-functionality autonomous mobile robot is to react properly and effectively to continuously changing situations and events, either external or internal, while performing its assigned tasks. A new approach and architecture for a context recognition and situation assessment module (CORSA) is presented in this paper. CORSA is a multi-level information processing module consisting of adaptive decision and classification algorithms. It performs dynamic mapping from the data space to the context space and dynamically decides on the context class. A learning mechanism is employed to update the decision variables so as to minimize the probability of misclassification. CORSA is embedded within the Mission Manager module of the intelligent autonomous hyper-controller (IAHC) of the mobile robot. The information regarding operating context, events, and situation is then communicated to other modules of the IAHC, where it is used to: (a) select the appropriate action strategy; (b) support the processes of arbitration and conflict resolution between reflexive behaviors and reasoning-driven behaviors; (c) predict future events and situations; and (d) determine criteria and priorities for planning, replanning, and decision making.
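The mapping from data space to context space with adaptively updated decision variables can be illustrated by a nearest-prototype classifier with an LVQ-style update. This is a hypothetical stand-in, not the actual CORSA algorithms; the context names and sensor features are invented.

```python
def classify(x, prototypes):
    """Map a data vector to the nearest context prototype
    (data space -> context space)."""
    return min(prototypes,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, prototypes[c])))

def update(x, label, prototypes, lr=0.2):
    """LVQ-style adaptive update standing in for the learning mechanism:
    pull the winning prototype toward x when the decision was right, push it
    away when it was wrong, reducing future misclassification."""
    winner = classify(x, prototypes)
    sign = 1.0 if winner == label else -1.0
    prototypes[winner] = tuple(p + sign * lr * (a - p)
                               for p, a in zip(prototypes[winner], x))

# hypothetical contexts described by two sensor features (speed, obstacle density)
prototypes = {"open_terrain": (0.8, 0.1), "cluttered": (0.3, 0.6)}
```

Each labeled observation nudges the decision boundary, which is the sense in which the module adapts its decision variables online rather than relying on a fixed classifier.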
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and its originality stems from integrating habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors. PMID:25120464
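The flavor of such a minimal OC kernel can be conveyed by a toy model (a deliberate simplification, not the authors' spiking micro-circuit): a single synapse links stimulus to action, the reward or punishment that follows an emitted action adjusts the synapse, and a habituating drive modulates when the action fires at all.

```python
def train(trials, w=0.5, lr=0.3, habituation=0.9):
    """Toy operant-conditioning kernel. `trials` lists the reward the agent
    would receive if it acts on that trial (positive = reward, negative =
    punishment). The action fires when drive*w exceeds a threshold; reward
    strengthens the synapse w, punishment weakens it, and repeated acting
    habituates the drive (which recovers when idle)."""
    drive = 1.0
    history = []
    for reward_if_acted in trials:
        acted = drive * w > 0.25
        if acted:
            w = min(1.0, max(0.0, w + lr * reward_if_acted))
            drive *= habituation              # habituation after acting
        else:
            drive = min(1.0, drive / habituation)  # drive recovers when idle
        history.append((acted, round(w, 3)))
    return w, history
```

Consistent reward drives the synapse toward its ceiling, while consistent punishment suppresses the behavior until the action stops being emitted, mirroring acquisition and extinction in OC.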
Software for Automation of Real-Time Agents, Version 2
NASA Technical Reports Server (NTRS)
Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg;
2005-01-01
Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial intelligence computer program for use in planning and execution of actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitoring the sequence execution, and modifying the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on the unification of planning and execution to increase the ability of the autonomous agent to perform under tight resource and time constraints coupled with uncertainty in how much of resources and time will be required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture by increasing the interaction between the software components that perform deliberation and reactive functions. The increase in interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a state of failure.
Agricultural robot designed for seeding mechanism
NASA Astrophysics Data System (ADS)
Sunitha, K. A., Dr.; Suraj, G. S. G. S.; Sowrya, CH P. N.; Atchyut Sriram, G.; Shreyas, D.; Srinivas, T.
2017-05-01
In the field of agriculture, plantation begins with ploughing the land and sowing seeds. The traditional methods, a plough drawn by an ox or a tractor, need human involvement to carry out the process. The driving force behind this work is to reduce human interference in agriculture and to make it cost effective. In this work, a part of the land is taken into consideration, and the robot introduced localizes the path and can navigate itself without human action. For ploughing, this robot is provided with tentacles fitted with saw blades. The sowing mechanism uses long-toothed gears actuated by motors. The body is divided into two parts: the tail part acts as a container for seeds, while the front part holds all the electronics used for automation and actuation. Locomotion is provided by wheels covered by conveyor belts. Gears at the back of the robot rotate at equal speed with respect to each other and the saw blades. For each rotation, every tooth on a gear picks up seeds and drops them on the field. A camera at the front end tracks the path at every fixed distance, and at the minimum distance it takes the pre-programmed path.
From self-observation to imitation: visuomotor association on a robotic hand.
Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo
2008-04-15
Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degrees-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be, at the neuronal level, founded on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots, as well as to shed new light on the cognitive processes at play in early human cognitive development.
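The self-observation-to-imitation route can be sketched as follows. This is a toy stand-in for the 16-DoF platform: binary motor units, an identity-plus-noise "visual system," and a plain Hebbian matrix, all illustrative rather than the paper's actual network.

```python
import random

def babble_and_associate(n_units=16, n_samples=2000, seed=1):
    """Self-observation sketch: the hand issues random motor patterns
    ("babbling"), a toy visual system observes the resulting posture, and a
    Hebbian matrix W accumulates vision-motor co-activity. The vision model
    (identity plus Gaussian noise) is a deliberate simplification."""
    rng = random.Random(seed)
    W = [[0.0] * n_units for _ in range(n_units)]
    for _ in range(n_samples):
        motor = [rng.choice([0.0, 1.0]) for _ in range(n_units)]
        vision = [m + rng.gauss(0.0, 0.1) for m in motor]  # observe own hand
        for i in range(n_units):                           # Hebbian update
            for j in range(n_units):
                W[i][j] += vision[i] * motor[j] / n_samples
    return W

def imitate(W, seen):
    """Imitation as recall: project an observed (demonstrated) posture
    through the learned association and threshold at the mean activation."""
    n = len(W)
    raw = [sum(seen[i] * W[i][j] for i in range(n)) for j in range(n)]
    th = sum(raw) / n
    return [1.0 if r > th else 0.0 for r in raw]
```

Because the association is learned purely from the robot's own babbling, reproducing a demonstrated posture requires no dedicated imitation machinery, which is the paper's central point.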
Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S; Winkielman, Piotr
2018-01-01
Facial actions are key elements of non-verbal behavior. Perceivers' reactions to others' facial expressions often represent a match or mirroring (e.g., they smile at a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants' reactions over the zygomaticus muscle (smiling) and the corrugator muscle (frowning). We found that participants' facial responses to the android's expressions reflect their informational value rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants' responses to the game outcome were similar regardless of whether it was conveyed via the android's smile or frown. Furthermore, the outcome had a greater impact on people's facial reactions when it was conveyed through the android's face than through a computer screen. These findings demonstrate that the facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions.
Review of the genus Menestheus Stål, 1868 (Hemiptera: Heteroptera: Pentatomidae).
FaÚndez, Eduardo I; Rider, David A
2018-04-10
The Aeptini (Pentatomidae: Pentatominae) genus Menestheus Stål, 1868, is redescribed. The original misidentification of the type species of Menestheus is corrected by action of the first reviser herein, establishing Paramenestheus nercivus non Dallas, 1851 sensu Stål (1868) = Menestheus cuneatus Distant, 1899 as the type species. Menestheus mcphersoni sp. nov. is described and illustrated. Characters separating the two species are discussed.
NASA Technical Reports Server (NTRS)
DeMott, Diana
2013-01-01
Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues, including stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of situations that can lead to human errors, which include taking the wrong action, taking no action, or making bad decisions, provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.
Aeinehband, Shahin; Behbahani, Homira; Grandien, Alf; Nilsson, Bo; Ekdahl, Kristina N.; Lindblom, Rickard P. F.; Piehl, Fredrik; Darreh-Shori, Taher
2013-01-01
Acetylcholine (ACh), the classical neurotransmitter, also affects a variety of nonexcitable cells, such as endothelia, microglia, astrocytes and lymphocytes in both the nervous system and secondary lymphoid organs. Most of these cells are very distant from cholinergic synapses. The action of ACh on these distant cells is unlikely to occur through diffusion, given that ACh is very short-lived in the presence of acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE), two extremely efficient ACh-degrading enzymes abundantly present in extracellular fluids. In this study, we show compelling evidence for presence of a high concentration and activity of the ACh-synthesizing enzyme, choline-acetyltransferase (ChAT) in human cerebrospinal fluid (CSF) and plasma. We show that ChAT levels are physiologically balanced to the levels of its counteracting enzymes, AChE and BuChE in the human plasma and CSF. Equilibrium analyses show that soluble ChAT maintains a steady-state ACh level in the presence of physiological levels of fully active ACh-degrading enzymes. We show that ChAT is secreted by cultured human-brain astrocytes, and that activated spleen lymphocytes release ChAT itself rather than ACh. We further report differential CSF levels of ChAT in relation to Alzheimer’s disease risk genotypes, as well as in patients with multiple sclerosis, a chronic neuroinflammatory disease, compared to controls. Interestingly, soluble CSF ChAT levels show strong correlation with soluble complement factor levels, supporting a role in inflammatory regulation. This study provides a plausible explanation for the long-distance action of ACh through continuous renewal of ACh in extracellular fluids by the soluble ChAT and thereby maintenance of steady-state equilibrium between hydrolysis and synthesis of this ubiquitous cholinergic signal substance in the brain and peripheral compartments. 
These findings may have important implications for the role of cholinergic signaling in states of inflammation in general and in neurodegenerative disease, such as Alzheimer’s disease and multiple sclerosis in particular. PMID:23840379
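The steady-state equilibrium between ACh synthesis and hydrolysis described in this abstract can be sketched as a simple balance equation; the rate constants below are hypothetical placeholders, not measured values from the study:

```python
# Minimal sketch (illustrative rate constants, not measured values):
# extracellular ACh governed by zero-order synthesis by soluble ChAT
# and first-order hydrolysis by AChE/BuChE:
#   d[ACh]/dt = k_syn - k_deg * [ACh]
# Steady state: [ACh]* = k_syn / k_deg.

def ach_steady_state(k_syn, k_deg):
    """Analytic steady-state ACh concentration."""
    return k_syn / k_deg

def simulate_ach(k_syn, k_deg, ach0=0.0, dt=0.001, steps=20000):
    """Forward-Euler integration of the balance equation."""
    ach = ach0
    for _ in range(steps):
        ach += dt * (k_syn - k_deg * ach)
    return ach

k_syn, k_deg = 2.0, 0.5                    # hypothetical units
target = ach_steady_state(k_syn, k_deg)    # 4.0
simulated = simulate_ach(k_syn, k_deg)
print(round(target, 3), round(simulated, 3))
```

The point of the sketch is that a nonzero steady-state ACh level persists even with fully active degrading enzymes, so long as synthesis is continuously renewed.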
3D hierarchical spatial representation and memory of multimodal sensory data
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinates, body-centered coordinates, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, locations of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels.
When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system, and present preliminary results on a real robotic platform.
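The conversion between levels of such a hierarchy (e.g., head-centered to body-centered coordinates) amounts to composing rotations and translations. The sketch below is an illustrative assumption, not the paper's implementation; the frame conventions and head offset are made up for the example:

```python
import math

# Hypothetical sketch: convert a target position expressed in a
# head(camera)-centered frame into a body-centered frame, given the
# pan (yaw) and tilt (pitch) of the head and its offset from the body
# origin. Frame conventions and the offset are illustrative assumptions;
# forward is taken to be the +y axis in both frames.

def head_to_body(p_head, pan, tilt, head_offset=(0.0, 0.0, 0.3)):
    """Rotate by tilt (about x), then pan (about z), then translate."""
    x, y, z = p_head
    # tilt: rotation about the x axis
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    # pan: rotation about the z axis
    x, y = (x * math.cos(pan) - y * math.sin(pan),
            x * math.sin(pan) + y * math.cos(pan))
    ox, oy, oz = head_offset
    return (x + ox, y + oy, z + oz)

# A target 1 m straight ahead of a head panned 90 degrees ends up on
# the body's x axis (here the negative side), plus the head offset.
p = head_to_body((0.0, 1.0, 0.0), pan=math.pi / 2, tilt=0.0)
print(tuple(round(c, 6) for c in p))
```

A working memory can then store each location at whichever level of the hierarchy a given effector (camera, arm, wheel base) needs.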
Prell, Christina; Sun, Laixiang; Feng, Kuishuang; He, Jiaying; Hubacek, Klaus
2017-05-15
Land-use change is increasingly driven by global trade. The term "telecoupling" has been gaining ground as a means to describe how human actions in one part of the world can have spatially distant impacts on land and land-use in another. These interactions can, over time, create both direct and spatially distant feedback loops, in which human activity and land use mutually impact one another over great expanses. In this paper, we develop an analytical framework to clarify spatially distant feedbacks in the case of land use and global trade. We use an innovative mix of multi-regional input-output (MRIO) analysis and stochastic actor-oriented models (SAOMs) for analyzing the co-evolution of changes in trade network patterns with those of land use, as embodied in trade. Our results indicate that the formation of trade ties and changes in embodied land use mutually impact one another, and further, that these changes are linked to disparities in countries' wealth. Through identifying this feedback loop, our results support ongoing discussions about the unequal trade patterns between rich and poor countries that result in uneven distributions of negative environmental impacts. Finally, evidence for this feedback loop is present even when controlling for a number of underlying mechanisms, such as countries' land endowments, their geographical distance from one another, and a number of endogenous network tendencies. Copyright © 2017 Elsevier B.V. All rights reserved.
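The multi-regional input-output (MRIO) accounting behind "embodied land use" rests on the standard Leontief model, x = (I - A)^-1 y, with a land-intensity vector applied to total output. The sketch below uses toy two-sector numbers, not the paper's data:

```python
# Toy sketch of the standard Leontief/MRIO accounting used to compute
# land embodied in trade (illustrative 2-sector numbers, not the
# paper's data): x = (I - A)^-1 y; embodied land = l_i * x_i per sector.

def leontief_2x2(A, y):
    """Solve x = (I - A)^-1 y for a 2x2 technical-coefficient matrix."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    x0 = (d * y[0] - b * y[1]) / det
    x1 = (-c * y[0] + a * y[1]) / det
    return [x0, x1]

A = [[0.1, 0.2],              # inter-sector input coefficients
     [0.3, 0.1]]
y = [100.0, 50.0]             # final demand
land_intensity = [0.5, 0.2]   # hectares per unit output

x = leontief_2x2(A, y)
embodied = [li * xi for li, xi in zip(land_intensity, x)]
print([round(v, 2) for v in x], [round(v, 2) for v in embodied])
```

In the paper's setting these embodied-land flows become edge weights whose co-evolution with the trade network is then modeled with stochastic actor-oriented models.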
Multi-agent Reinforcement Learning Model for Effective Action Selection
NASA Astrophysics Data System (ADS)
Youk, Sang Jo; Lee, Bong Keun
Reinforcement learning is a subfield of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. In the multi-agent case in particular, the state and action spaces become enormous compared to the single-agent case, so an effective action-selection strategy is needed for efficient reinforcement learning. This paper proposes a multi-agent reinforcement learning model based on a fuzzy inference system in order to improve learning speed and select effective actions in a multi-agent setting. The paper verifies the effectiveness of the action-selection strategy through evaluation tests based on RoboCup Keepaway, one of the standard test-beds for multi-agent learning. The proposed model can be applied to evaluate the efficiency of various intelligent multi-agent systems, and also to the strategy and tactics of robot soccer systems.
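The abstract does not give the paper's fuzzy rule base, so the sketch below is only an illustrative stand-in: a standard tabular Q-learning backup plus a soft, membership-style action preference in place of the fuzzy inference system:

```python
import math
import random

# Illustrative sketch only: a one-step tabular Q-learning update plus a
# crude "fuzzy" soft action preference, standing in for the paper's
# fuzzy inference system (whose actual rule base is not in the abstract).

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning backup."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def fuzzy_select(Q, s, actions, temperature=0.5):
    """Soft preference over actions: higher Q -> higher membership."""
    weights = [math.exp(Q.get((s, a), 0.0) / temperature) for a in actions]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for a, w in zip(actions, weights):
        acc += w
        if r <= acc:
            return a
    return actions[-1]

# Keepaway-flavored toy states/actions (hypothetical labels):
Q, actions = {}, ["hold", "pass"]
q_update(Q, "open", "pass", 1.0, "safe", actions)
print(round(Q[("open", "pass")], 3))
```

In Keepaway-style tasks the appeal of a soft preference is that it keeps exploring plausible passes while the Q-values are still poorly estimated.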
Signal propagation along the axon.
Rama, Sylvain; Zbili, Mickaël; Debanne, Dominique
2018-03-08
Axons link distant brain regions and are usually considered as simple transmission cables in which reliable propagation occurs once an action potential has been generated. Safe propagation of action potentials relies on specific ion channel expression at strategic points of the axon such as nodes of Ranvier or axonal branch points. However, while action potentials are generally considered as the quantum of neuronal information, their signaling is not entirely digital. In fact, both their shape and their conduction speed have been shown to be modulated by activity, leading to regulations of synaptic latency and synaptic strength. We report here newly identified mechanisms of (1) safe spike propagation along the axon, (2) compartmentalization of action potential shape in the axon, (3) analog modulation of spike-evoked synaptic transmission and (4) alteration in conduction time after persistent regulation of axon morphology in central neurons. We discuss the contribution of these regulations in information processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
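Why safe propagation requires active regeneration at nodes of Ranvier can be seen from textbook passive cable theory, sketched below; this is standard material, not the paper's model, and the parameter values are only typical-order illustrations:

```python
import math

# Textbook cable-theory sketch (not from the paper): a passively
# spreading voltage along an axon decays as V(x) = V0 * exp(-x / lam),
# with length constant lam = sqrt(R_m * d / (4 * R_i)).
# Parameter values below are typical-order illustrations only.

def length_constant(R_m, R_i, d):
    """lam in cm, for membrane resistivity R_m (ohm*cm^2),
    axial resistivity R_i (ohm*cm) and fiber diameter d (cm)."""
    return math.sqrt(R_m * d / (4.0 * R_i))

def passive_decay(V0, x, lam):
    """Steady-state passive voltage at distance x from the source."""
    return V0 * math.exp(-x / lam)

lam = length_constant(R_m=20000.0, R_i=100.0, d=1e-4)  # ~0.07 cm
print(round(lam, 4), round(passive_decay(10.0, lam, lam), 3))
```

With a length constant of well under a millimeter, a purely passive signal would vanish long before reaching a distant target, which is why spikes must be actively regenerated by ion channels at strategic points such as nodes of Ranvier.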