Sample records for robotic agent command

  1. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  2. Software for Automation of Real-Time Agents, Version 2

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg

    2005-01-01

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial intelligence computer program for use in planning and execution of actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitors sequence execution, and modifies the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on the unification of planning and execution to increase the ability of the autonomous agent to perform under tight resource and time constraints, coupled with uncertainty in how many resources and how much time will be required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture to increase the interaction between the software components that perform deliberative and reactive functions. The increased interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before the agent enters a failure state.
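
    The loop below is a minimal, hypothetical sketch of the execute-monitor-replan cycle this abstract describes: execution cost is uncertain, a deviation is detected as soon as the remaining plan becomes infeasible, and replanning happens before failure. All names and the toy resource model are illustrative assumptions, not the CLEaR API.

      import random

      def plan(goals):
          # Toy planner: one abstract action per remaining goal.
          return list(goals)

      def execute(goal, resources):
          # Toy executive: the actual cost of an action is uncertain (1 or 2 units).
          return resources - random.choice([1, 2])

      def run(goals, resources):
          sequence = plan(goals)
          while sequence and resources > 0:
              resources = execute(sequence.pop(0), resources)
              # Early deviation detection: if the remaining sequence is no
              # longer affordable, replan now rather than fail mid-sequence.
              if resources < len(sequence):
                  sequence = plan(sequence[:max(resources, 0)])
          return resources

      print(run(["g1", "g2", "g3"], resources=4))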

  3. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  4. Control Architecture for Robotic Agent Command and Sensing

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel

    2008-01-01

    Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is "good enough" with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
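
    As a minimal sketch of the MODT-style fusion idea, the snippet below has each behavior score a shared set of candidate actions and picks the action with the best weighted sum, so every behavior contributes cooperatively to the consensus. The behaviors, weights, and action space are invented for illustration and are not taken from CARACaS.

      import numpy as np

      candidate_turn_rates = np.linspace(-1.0, 1.0, 21)   # candidate actions (rad/s)

      def goal_seek(actions, heading_error):
          # Prefers turn rates that reduce the heading error toward the goal.
          return -np.abs(actions - np.clip(heading_error, -1.0, 1.0))

      def obstacle_avoid(actions, obstacle_side):
          # Penalizes turning toward an obstacle (+1 = obstacle on the left).
          return -np.maximum(0.0, actions * obstacle_side)

      def fuse(actions, behaviors, weights):
          # Consensus: all behaviors score all actions; weighted sum, then argmax.
          scores = sum(w * b(actions) for b, w in zip(behaviors, weights))
          return actions[np.argmax(scores)]

      behaviors = [lambda a: goal_seek(a, heading_error=0.8),
                   lambda a: obstacle_avoid(a, obstacle_side=+1.0)]
      print(fuse(candidate_turn_rates, behaviors, weights=[1.0, 2.0]))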

  5. Searching Dynamic Agents with a Team of Mobile Robots

    PubMed Central

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach. PMID:23012519
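
    A compact sketch of the grid-based Bayesian filter step described above: the predict step diffuses target probability outward by at most the targets' maximum speed, and the update step clears cells the robots currently observe as empty, then renormalizes. Grid size, diffusion kernel, and sensor footprint are assumptions for illustration.

      import numpy as np
      from scipy.ndimage import uniform_filter

      belief = np.full((50, 50), 1.0 / 2500)      # uniform prior over the grid

      def predict(belief, max_speed_cells=1):
          # Targets may move up to max_speed_cells per step in any direction.
          return uniform_filter(belief, size=2 * max_speed_cells + 1, mode="constant")

      def update(belief, observed_empty):
          belief = belief.copy()
          belief[observed_empty] = 0.0            # no target where a robot looked
          s = belief.sum()
          return belief / s if s > 0 else np.full_like(belief, 1.0 / belief.size)

      seen = np.zeros((50, 50), dtype=bool)
      seen[0:10, 0:10] = True                     # one robot's sensor footprint
      belief = update(predict(belief), seen)
      print(belief.sum(), belief.max())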

  6. Searching dynamic agents with a team of mobile robots.

    PubMed

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment into a tree of connected regions using dynamic programming. This tree is used to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments, showing the validity of the approach.

  7. Envisioning Cognitive Robots for Future Space Exploration

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Stoica, Adrian

    2010-01-01

    Cognitive robots in the context of space exploration are envisioned with advanced capabilities of model building, continuous planning/re-planning, self-diagnosis, as well as the ability to exhibit a level of 'understanding' of new situations. An overview of some JPL components (e.g. CASPER, CAMPOUT) and a description of the architecture CARACaS (Control Architecture for Robotic Agent Command and Sensing) that combines these in the context of a cognitive robotic system operating in various scenarios are presented. Finally, two examples of typical scenarios, a multi-robot construction mission and a human-robot mission involving direct collaboration with humans, are given.

  8. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  9. Self-Organizing Map With Time-Varying Structure to Plan and Control Artificial Locomotion.

    PubMed

    Araujo, Aluizio F R; Santana, Orivaldo V

    2015-08-01

    This paper presents an algorithm, the self-organizing map-state trajectory generator (SOM-STG), to plan and control legged robot locomotion. The SOM-STG is based on an SOM with a time-varying structure, characterized by autonomously constructing closed-state trajectories from an arbitrary number of robot postures. Each trajectory represents a cyclical movement of the limbs of an animal. The SOM-STG was designed to possess important features of a central pattern generator, such as rhythmic pattern generation, synchronization between limbs, and swapping between gaits following a single command. The acquisition of data for SOM-STG is based on learning by demonstration, in which the data are obtained from different demonstrator agents. The SOM-STG can construct one or more gaits for a simulated robot with six legs, can control the robot with any of the gaits learned, and can smoothly swap gaits. In addition, SOM-STG can learn to construct a state trajectory from observing an animal in locomotion. In this paper, a dog is the demonstrator agent.
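
    The toy below sketches one ingredient of such an approach: a self-organizing map whose nodes form a ring, so that training on demonstrated postures yields a closed (cyclic) state trajectory. The growing, time-varying structure and the gait-swapping machinery of SOM-STG itself are omitted; sizes and rates are arbitrary.

      import numpy as np

      def train(weights, samples, epochs=200, lr=0.2, sigma=2.0):
          n_nodes = len(weights)
          for _ in range(epochs):
              for x in samples:
                  winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                  d = np.abs(np.arange(n_nodes) - winner)
                  d = np.minimum(d, n_nodes - d)    # ring topology: wrap around
                  h = np.exp(-d ** 2 / (2 * sigma ** 2))[:, None]
                  weights = weights + lr * h * (x - weights)
          return weights

      rng = np.random.default_rng(0)
      # Demonstration data: noisy postures sampled along a cyclic limb movement.
      t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
      demo = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.05, size=(100, 2))
      ring = train(rng.normal(scale=0.1, size=(20, 2)), demo)
      print(ring[:3])                               # nodes trace the learned cycle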

  10. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  11. Adjustably Autonomous Multi-agent Plan Execution with an Internal Spacecraft Free-Flying Robot Prototype

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Nicewarner, Keith

    2006-01-01

    We present a multi-agent, model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as to control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.

  12. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate, in real time, information from sensors of different levels of abstraction helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  13. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller for the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation.
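
    Below is a compact, illustrative EKF-SLAM predict/update step for a single corner feature, following the sequential EKF feature-based formulation named above. The state holds the robot pose and one landmark; the measurement model is simplified to a world-frame landmark observation, and line features and map management are omitted. All noise values are assumptions.

      import numpy as np

      x = np.zeros(5)                  # state: [x_r, y_r, theta, x_l, y_l]
      P = np.eye(5) * 0.1              # state covariance

      def predict(x, P, v, w, dt, q=0.01):
          # Unicycle motion model for the robot pose; the landmark is static.
          th = x[2]
          x = x.copy()
          x[0] += v * dt * np.cos(th)
          x[1] += v * dt * np.sin(th)
          x[2] += w * dt
          F = np.eye(5)
          F[0, 2] = -v * dt * np.sin(th)
          F[1, 2] = v * dt * np.cos(th)
          Q = np.zeros((5, 5))
          Q[:3, :3] = np.eye(3) * q    # process noise on the pose only
          return x, F @ P @ F.T + Q

      def update(x, P, z, r=0.05):
          # Simplified measurement: the corner's position in the world frame.
          H = np.zeros((2, 5)); H[0, 3] = 1.0; H[1, 4] = 1.0
          S = H @ P @ H.T + np.eye(2) * r
          K = P @ H.T @ np.linalg.inv(S)
          return x + K @ (z - H @ x), (np.eye(5) - K @ H) @ P

      x, P = predict(x, P, v=1.0, w=0.1, dt=0.1)
      x, P = update(x, P, z=np.array([2.0, 1.0]))
      print(x)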

  14. Cosine Kuramoto Based Distribution of a Convoy with Limit-Cycle Obstacle Avoidance Through the Use of Simulated Agents

    NASA Astrophysics Data System (ADS)

    Howerton, William

    This thesis presents a method for the integration of complex network control algorithms with localized, agent-specific algorithms for maneuvering and obstacle avoidance. This method allows for successful implementation of group and agent-specific behaviors. It has proven to be robust and will work for a variety of vehicle platforms. Initially, a review and implementation of two specific algorithms is detailed. The first, a modified Kuramoto model developed by Xu [1], utilizes tools from graph theory to efficiently perform the task of distributing agents. The second algorithm, developed by Kim [2], is an effective method for wheeled robots to avoid local obstacles using a limit-cycle navigation method. The results of implementing these methods on a test-bed of wheeled robots are presented. Control issues related to outside disturbances not anticipated in the original theory are then discussed. A novel method of using simulated agents to separate the task of distributing agents from agent-specific velocity and heading commands has been developed and implemented to address these issues. This new method can be used to combine various behaviors and is not limited to a specific control algorithm.
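
    As a rough illustration of the Kuramoto-style distribution idea, the sketch below integrates coupled phase dynamics with repulsive coupling so the agents' phases spread toward an even spacing around a circle. The coupling sign, gain, and all-to-all topology are assumptions; the thesis' modified model and its graph-theoretic weighting are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      theta = rng.uniform(0, 2 * np.pi, size=6)   # agent positions as phases

      def step(theta, k=-0.5, dt=0.05):
          # theta_i' = k * sum_j sin(theta_j - theta_i); k < 0 repels the phases.
          diff = theta[None, :] - theta[:, None]
          return theta + dt * k * np.sin(diff).sum(axis=1)

      for _ in range(2000):
          theta = step(theta)
      print(np.sort(np.mod(theta, 2 * np.pi)))    # roughly evenly spaced phases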

  15. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  16. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background: The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods: In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start, and exit. A kinematic controller for the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results: The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed patients. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions: The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation. PMID:20163735

  17. Squad-Level Soldier-Robot Dynamics: Exploring Future Concepts Involving Intelligent Autonomous Robots

    DTIC Science & Technology

    2015-02-01

    unanimous for the run and duck commands as other commands commonly used. The verbal commands surveyed, as well as other suggested verbal commands that... stop, and duck. Additional verbal commands suggested were shut down, follow, destroy, status, and move out. The verbal commands surveyed and the... identify the verbal commands you would use to control the squad and the ASM:

      Phrase   Yes   No
      Halt       9    3
      Stop       9    3
      Move      11    1
      Run        7    5
      Duck       6    6
      Other

  18. Survey of Command Execution Systems for NASA Spacecraft and Robots

    NASA Technical Reports Server (NTRS)

    Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich

    2005-01-01

    NASA spacecraft and robots operate at long distances from Earth. Command sequences generated manually, or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and type of control they are designed for varies greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.

  19. Hybrid Exploration Agent Platform and Sensor Web System

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

    A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW), like the ANTS concept, is a sensor web concept. The HEAP-SW is conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP Sensor Web (HEAP-SW) consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DU). The HEAP-SW as a whole will refer to any group of mobile agents or robots where each robot is a mobile data-collection unit that spends most of its time acting in concert with all other robots and DUs in the web and with the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.

  20. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated intelligence, surveillance, and reconnaissance (ISR) mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  1. Design of multifunction anti-terrorism robotic system based on police dog

    NASA Astrophysics Data System (ADS)

    You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie

    2007-11-01

    Aimed at some typical constraints of police dogs and robots currently used in reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on the police dog is introduced. The system is made up of two parts: a portable commanding device and the police-dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module, and commanding module; it implements remote control of the police dogs and provides real-time monitoring of video and images. The police-dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data of counter-terrorism sites in real time and mounts attacks on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of a typical anti-terrorism robot's mechanical structure and control algorithm, but it also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.

  2. Overcoming Robot-Arm Joint Singularities

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Houck, J. A.

    1986-01-01

    Kinematic equations allow arm to pass smoothly through singular region. Report discusses mathematical singularities in equations of robot-arm control. Operator commands robot arm to move in direction relative to its own axis system by specifying velocity in that direction. Velocity command then resolved into individual-joint rotational velocities in robot arm to effect motion. However, usual resolved-rate equations become singular when robot arm is straightened.
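
    A standard remedy for this kind of singularity is damped least-squares inversion of the Jacobian, sketched below for a two-link planar arm: near the straightened configuration a plain inverse produces unbounded joint rates, while damping keeps them finite. This is an illustrative textbook technique, not the report's specific formulation.

      import numpy as np

      def resolved_rate(J, v, damping=0.1):
          # dq = J^T (J J^T + lambda^2 I)^{-1} v  (damped least squares)
          m = J.shape[0]
          return J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(m), v)

      def jacobian(q1, q2, l1=1.0, l2=1.0):
          # Jacobian of a two-link planar arm.
          return np.array([
              [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
              [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)]])

      J = jacobian(0.3, 1e-3)            # elbow nearly straight: close to singular
      v = np.array([0.0, 0.1])           # commanded end-effector velocity
      print(resolved_rate(J, v))         # joint rates stay bounded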

  3. TRICCS: A proposed teleoperator/robot integrated command and control system for space applications

    NASA Technical Reports Server (NTRS)

    Will, R. W.

    1985-01-01

    Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.

  4. Fast Grasp Contact Computation for a Serial Robot

    NASA Technical Reports Server (NTRS)

    Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)

    2015-01-01

    A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.

  5. Generic command interpreter for robot controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, J.

    1991-04-09

    Generic command interpreter programs have been written for robot controllers at Sandia National Laboratories (SNL). Each interpreter program resides on a robot controller and interfaces the controller with a supervisory program on another (host) computer. We call these interpreter programs monitors because they wait, monitoring a communication line, for commands from the supervisory program. These monitors are designed to interface with the object-oriented software structure of the supervisory programs. The functions of the monitor programs are written in each robot controller's native language but reflect the object-oriented functions of the supervisory programs. These functions and other specifics of the monitor programs written for three different robots at SNL will be discussed. 4 refs., 4 figs.
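
    A hypothetical sketch of such a monitor: block on a communication line, parse one command per line, dispatch it to a controller function, and acknowledge the supervisor. The command names, wire format, and socket transport are invented for illustration and are not the SNL protocol.

      import socket

      def move_joint(axis, angle):
          print(f"moving joint {axis} to {angle} deg")

      DISPATCH = {
          "MOVE": lambda args: move_joint(int(args[0]), float(args[1])),
          "OPEN": lambda args: print("opening gripper"),
      }

      def monitor(port=9000):
          with socket.socket() as srv:
              srv.bind(("", port))
              srv.listen(1)
              conn, _ = srv.accept()            # wait for the supervisory program
              with conn, conn.makefile("r") as lines:
                  for line in lines:            # one command per line
                      name, *args = line.split()
                      if name == "QUIT":
                          break
                      if name in DISPATCH:
                          DISPATCH[name](args)
                          conn.sendall(b"OK\n")
                      else:
                          conn.sendall(b"ERR unknown command\n")

      # monitor()  # blocks, waiting for supervisor commands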

  6. Redundant arm control in a supervisory and shared control system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.

    1992-01-01

    The Extended Task Space Control approach to robotic operations based on manipulator behaviors derived from task requirements is described. No differentiation between redundant and non-redundant robots is made at the task level. The manipulation task behaviors are combined into a single set of motion commands. The manipulator kinematics are used subsequently in mapping motion commands into actuator commands. Extended Task Space Control is applied to a Robotics Research K-1207 seven degree-of-freedom manipulator in a supervisory telerobot system as an example.

  7. Planning and Teaching Compliant Motion Strategies.

    DTIC Science & Technology

    1987-01-01

    commanded motion. The black polyhedron shown in the figure contains a set of commanded positions. The robot is to aim for any point in the polyhedron. The... between the T-shape and the hole face will cause it to stop there. The black polyhedron is behind and more narrow than the stopping region to account for... motion. If the robot aims for any commanded position in the black polyhedron shown in the figure, then the robot will enter the second hole, slide along

  8. A satellite orbital testbed for SATCOM using mobile robots

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Lu, Wenjie; Wang, Zhonghai; Jia, Bin; Wang, Gang; Wang, Tao; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2016-05-01

    This paper develops and evaluates a satellite orbital testbed (SOT) for satellite communications (SATCOM). SOT can emulate a 3D satellite orbit using omni-wheeled robots and a robotic arm. The 3D motion of the satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The former are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (robotic arm) attached to the robot. The emulated satellite positions feed a measurement model, whose results are used to perform multiple-space-object tracking. The tracking results then drive maneuver detection and collision alerting. Satellite maneuver commands are translated into robot commands and robotic-arm commands. In SATCOM, the effects of jamming depend on the range and angles of the positions of the satellite transponder relative to the jamming satellite. We extend the SOT to include USRP transceivers. In the extended SOT, the relative ranges and angles are implemented using the omni-wheeled robots and robotic arms.
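
    The motion split the testbed relies on reduces to projecting a satellite position onto the equatorial plane for the omni-wheeled robot and sending the out-of-plane component to the arm's ball height, roughly as below. The scale factors and frame conventions are invented lab values, not the paper's calibration.

      import numpy as np

      PLANE_SCALE = 1.0 / 1e6      # metres of lab floor per metre of orbit (assumed)
      ARM_SCALE = 1.0 / 1e6        # metres of rod travel per metre of altitude (assumed)

      def to_testbed(r_eci):
          # Partition the 3D motion: in-plane part drives the robot, z drives the arm.
          x, y, z = r_eci
          robot_xy = np.array([x, y]) * PLANE_SCALE
          ball_height = z * ARM_SCALE
          return robot_xy, ball_height

      r = np.array([6.7e6, 1.2e6, 8.0e5])   # example satellite position (m)
      print(to_testbed(r))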

  9. Human Robotic Swarm Interaction Using an Artificial Physics Approach

    DTIC Science & Technology

    2014-12-01

    calculates virtual forces that are summed and translated into velocity commands. The virtual forces are modeled after real physical forces, such as gravitational and Coulomb forces, but are not restricted to them; for example, the force magnitude may not be... Results from the physical experiments show that an artificial physics-based framework is an effective way to allow multiple agents to follow a human...
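
    The core loop that excerpt describes can be sketched as follows: sum a virtual attract/repel force contributed by each neighbor and saturate the resultant into a velocity command. The particular force law and gains here are generic placeholders, not the report's.

      import numpy as np

      def virtual_force(p_i, p_j, r0=2.0, g=1.0):
          # Repel inside the desired spacing r0, attract outside it.
          d = p_j - p_i
          r = np.linalg.norm(d) + 1e-9
          return g * (r - r0) * d / r

      def velocity_command(i, positions, gain=0.5, v_max=1.0):
          f = sum(virtual_force(positions[i], p)
                  for j, p in enumerate(positions) if j != i)
          v = gain * f
          n = np.linalg.norm(v)
          return v if n <= v_max else v * (v_max / n)   # saturate to robot limits

      pts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 3.0])]
      print([velocity_command(k, pts) for k in range(len(pts))])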

  10. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko

    2011-01-01

    Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effect of overlays that are either superimposed on or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.

  11. Towards Human-Friendly Efficient Control of Multi-Robot Teams

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus

    2013-01-01

    This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in the shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends both on the robots' levels of autonomy and on the operator's ability to command and control the team. The paper extends the classic framework of levels of autonomy (LOA) to levels/hierarchies of autonomy characteristic of groups (G-LOA), and uses it to determine new strategies for control. A UGV-oriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.

  12. Predictive Interfaces for Long-Distance Tele-Operations

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas

    2005-01-01

    We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. First, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Second, we develop compensation for the time delay that exists when sending telemetry data from a remote operation point to robots located at low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time delay is introduced into the system, the rate of tele-operation slows down to mimic a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time delays. To this end, we are developing an interface which will allow us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
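
    Intent prediction with a hidden Markov model reduces, at its simplest, to running the forward algorithm over a stream of discretized operator inputs and reading off the most probable intent. The sketch below uses two invented intents and toy probabilities purely for illustration; Robonaut's actual models and observation streams are far richer.

      import numpy as np

      states = ["reach", "grasp"]
      A = np.array([[0.9, 0.1],        # intent transition probabilities
                    [0.2, 0.8]])
      B = np.array([[0.7, 0.2, 0.1],   # P(observation | intent); observations:
                    [0.1, 0.3, 0.6]])  # 0=arm-move, 1=pause, 2=hand-close
      pi = np.array([0.5, 0.5])

      def predict_intent(observations):
          # Forward algorithm with per-step normalization to avoid underflow.
          alpha = pi * B[:, observations[0]]
          for o in observations[1:]:
              alpha = (alpha @ A) * B[:, o]
              alpha /= alpha.sum()
          return states[int(np.argmax(alpha))], alpha

      print(predict_intent([0, 0, 1, 2, 2]))   # belief drifts from "reach" to "grasp"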

  13. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Shum, Simon J. Buckingham; Shadbolt, Nigel

    2007-01-01

    Mobile Agents (MA) is an advanced Extra-Vehicular Activity (EVA) communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. MA is voice controlled and provides information verbally to the astronauts through programs called "personal agents." The system partly automates the role of CapCom in Apollo, including monitoring and managing navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. Data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in authentic work contexts, including six years of ethnographic observation of field geology. Analog field experiments in Utah enabled empirically discovering requirements and testing alternative technologies and protocols. We report on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to serve as a relay on the wireless network.

  14. Effect of motor dynamics on nonlinear feedback robot arm control

    NASA Technical Reports Server (NTRS)

    Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping

    1991-01-01

    A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further experimentally evaluated on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory-tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.

  15. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low-visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
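
    A minimal sketch of the decoding stage, under assumptions: window the multi-channel EMG stream, extract a cheap amplitude feature per channel, and classify gestures with a nearest-centroid rule. The feature and classifier choices here are generic stand-ins, not the published BioSleeve pipeline.

      import numpy as np

      def features(window):
          # window: (samples, channels); mean absolute value per channel is a
          # common, inexpensive EMG feature.
          return np.mean(np.abs(window), axis=0)

      def train_centroids(windows, labels):
          X = np.array([features(w) for w in windows])
          y = np.array(labels)
          return {g: X[y == g].mean(axis=0) for g in np.unique(y)}

      def classify(window, centroids):
          f = features(window)
          return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

      rng = np.random.default_rng(2)
      fist = [rng.normal(0, 1.0, (200, 16)) for _ in range(10)]   # strong activity
      rest = [rng.normal(0, 0.2, (200, 16)) for _ in range(10)]   # weak activity
      cents = train_centroids(fist + rest, ["fist"] * 10 + ["rest"] * 10)
      print(classify(rng.normal(0, 1.0, (200, 16)), cents))       # -> fist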

  16. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  17. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. HRP Gaps: This HRI research contributes to closure of HRP gaps by providing information on how display and control characteristics - those related to guidance, feedback, and command modalities - affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines to design effective human-robot interfaces.

  18. Laboratory testing of candidate robotic applications for space

    NASA Technical Reports Server (NTRS)

    Purves, R. B.

    1987-01-01

    Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, the robot does all necessary task sequencing and planning to satisfy the command. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.

  19. Virtual Reality Based Support System for Layout Planning and Programming of an Industrial Robotic Work Cell

    PubMed Central

    Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface, so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user can make numerous attempts to attain an optimum solution. A case study was conducted in the Robotics Laboratory to assemble an electronics casing, and it was found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual-reality-based solution approach can be implemented in an industrial robotic work cell. PMID:25360663

  20. Virtual reality based support system for layout planning and programming of an industrial robotic work cell.

    PubMed

    Yap, Hwa Jen; Taha, Zahari; Dawal, Siti Zawiah Md; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface, so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user can make numerous attempts to attain an optimum solution. A case study was conducted in the Robotics Laboratory to assemble an electronics casing, and it was found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual-reality-based solution approach can be implemented in an industrial robotic work cell.

  1. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM to ready them for transport operations. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a Virtual Memory Extended (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.
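
    A minimal sketch of the layered message-passing pattern described above, with an in-process queue standing in for the Ethernet link, the VMEbus shared memory, and the DDE conversation; the class names and the command string are illustrative, not the CAA code:

      # Stand-ins for the TSC-to-MDS command relay; a Queue plays the role of
      # the VMEbus shared memory between the two operating systems.
      from queue import Queue

      class GenisasServer:
          """Accepts transport commands from the TSC (the 'Ethernet' side)."""
          def __init__(self, shared_memory: Queue):
              self.shared_memory = shared_memory

          def handle_command(self, command: str):
              # Notify the robot-control side of the pending command.
              self.shared_memory.put(command)

      class RobotControlBridge:
          """Polls shared memory and delivers commands to the robot software."""
          def __init__(self, shared_memory: Queue):
              self.shared_memory = shared_memory

          def poll(self):
              while not self.shared_memory.empty():
                  command = self.shared_memory.get()
                  # Stand-in for the Windows DDE conversation with MDS.
                  print(f"MDS executing: {command}")

      shared = Queue()
      GenisasServer(shared).handle_command("TRANSPORT rack_3 -> SLM_digestion")  # hypothetical command
      RobotControlBridge(shared).poll()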

  2. Human-Robot Interaction Directed Research Project

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee

    2014-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study, on video overlays, investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored: command guidance (CG), situation guidance (SG), and a combination of the two (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which they can infer the input commands. The combination of CG and SG provided operators with both explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG versus SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG versus SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation because of the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.

  3. Human-Robot Cooperation with Commands Embedded in Actions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kazuki; Yamada, Seiji

    In this paper, we first propose a novel interaction model, CEA (Commands Embedded in Actions), which explains how some existing systems reduce their users' workload. We then extend CEA into the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ an ACS (Action Coding System) that describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits CEA's strong point: a user can send a command to a robot through his/her natural actions for the task. The instance of the ECEA derived with the ACS is a temporal extension in which the user holds the final state of a previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks the object up. In addition, we measure the user's cognitive load under the ECEA and under a traditional method, DCM (Direct Commanding Method), in the sweeping task, and compare them. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.

  4. Single-Command Approach and Instrument Placement by a Robot on a Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2005-01-01

    AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.
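
    A toy, one-dimensional rendering of the multi-leg approach loop described above; the drift and fix magnitudes are invented, and the real system estimates full pose from stereoscopic feature tracking rather than the simulated measurements used here:

      # Each leg is planned from the current estimate, odometry drifts during
      # the leg, and a vision fix from tracked features restores an accurate
      # target-relative estimate before the next leg.
      import random

      true_pos, est_pos, target = 8.0, 8.0, 0.0   # ranges in meters

      while abs(est_pos - target) > 0.05:
          step = 0.5 * (target - est_pos)            # plan one leg from the estimate
          true_pos += step + random.gauss(0, 0.03)   # wheel slip: odometry drifts
          # Tracked feature points give a tight fix on the true pose, replacing
          # the drifted odometric estimate on each leg.
          est_pos = true_pos + random.gauss(0, 0.005)
          print(f"leg complete, estimated range {abs(est_pos - target):.3f} m")
      print("within placement tolerance; check arm workspace before deployment")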

  5. Translational control of a graphically simulated robot arm by kinematic rate equations that overcome elbow joint singularity

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Houck, J. A.; Carzoo, S. W.

    1984-01-01

    An operator commands a robot hand to move in a certain direction relative to its own axis system by specifying a velocity in that direction. This velocity command is then resolved into individual joint rotational velocities in the robot arm to effect the motion. However, the usual resolved-rate equations become singular when the robot arm is straightened. To overcome this elbow joint singularity, equations were developed which allow continued translational control of the robot hand even though the robot arm is (or is nearly) fully extended. A feature of the equations near full arm extension is that an operator simply extends and retracts the robot arm to reverse the direction of the elbow bend (a difficult maneuver for the usual resolved-rate equations). Results show successful movement of a graphically simulated robot arm.
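
    The singularity can be seen in the Jacobian of a planar two-link arm. The sketch below uses damped least squares, a standard workaround, to keep joint rates finite near full extension; note that the paper derives its own special-case equations rather than the damping shown here:

      import numpy as np

      L1 = L2 = 1.0

      def jacobian(q1, q2):
          """Jacobian of the planar 2-link hand position w.r.t. joint angles."""
          return np.array([
              [-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
              [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)],
          ])

      def resolved_rate(q, v_cmd, damping=0.01):
          """Joint rates for a commanded hand velocity; the damping term keeps
          the solution finite when the elbow is (nearly) straight (q2 ~ 0)."""
          J = jacobian(*q)
          # Damped least squares: qdot = J^T (J J^T + lambda^2 I)^-1 v
          JJt = J @ J.T + damping**2 * np.eye(2)
          return J.T @ np.linalg.solve(JJt, v_cmd)

      q = np.array([0.3, 1e-6])   # arm essentially fully extended: J is singular
      print("joint rates near full extension:",
            resolved_rate(q, np.array([0.1, 0.0])))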

  6. Intelligent behavior generator for autonomous mobile robots using planning-based AI decision making and supervisory control logic

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Martin, Jason; Flann, Nicholas S.; Moore, Kevin L.

    2002-07-01

    In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) has been funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). One of the several outgrowths of this work has been the development of a grammar-based approach to intelligent behavior generation for commanding autonomous robotic vehicles. In this paper we describe the use of this grammar for enabling autonomous behaviors. A supervisory task controller (STC) sequences high-level action commands (taken from the grammar) to be executed by the robot. It takes as input a set of goals and a partial (static) map of the environment and produces, from the grammar, a flexible script (or sequence) of the high-level commands that are to be executed by the robot. The sequence is derived by a planning function that uses a graph-based heuristic search (A* algorithm). Each action command has specific exit conditions that are evaluated by the STC following each task completion or interruption (in the case of disturbances or new operator requests). Depending on the system's state at task completion or interruption (including updated environmental and robot sensor information), the STC invokes a reactive response. This can include sequencing the pending tasks or initiating a replanning event, if necessary. Though applicable to a wide variety of autonomous robots, an application of this approach is demonstrated via simulations of ODIS, an omni-directional inspection system developed for security applications.
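
    The STC's execute/evaluate/replan loop might be skeletonized as below; the command names, exit statuses, and planner stub are invented for illustration:

      def plan(goals, world_map):
          # Stand-in for the A*-based planning function: returns a command script.
          return ["GOTO waypoint_A", "INSPECT object_1", "GOTO base"]

      def execute(command, world_state):
          print("executing:", command)
          return "ok"   # exit condition reported by the action; could be
                        # "blocked", "new_goal", etc. on interruption

      script = plan(goals=["inspect object_1"], world_map={})
      while script:
          command = script.pop(0)
          status = execute(command, world_state={})
          if status != "ok":
              # Reactive response: re-plan from the updated state rather than
              # continuing with the now-invalid remainder of the script.
              script = plan(goals=["inspect object_1"], world_map={})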

  7. Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot

    NASA Astrophysics Data System (ADS)

    Kelley, Leah; Ostovari, Saam; Burmeister, Aaron B.; Talke, Kurt A.; Pezeshkian, Narek; Rahimi, Amin; Hart, Abraham B.; Nguyen, Hoa G.

    2015-05-01

    A novel, multi-segmented magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect front and rear driving modules, so the robot can twist and turn. This permits its navigation over surface discontinuities while maintaining its adhesion to the hull. During operation, the magnetic crawler receives forward and turning velocity commands from either a tele-operator or high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the development of a simple, low-level, leader-follower controller that permits the rear module to follow the front module. The kinematics and dynamics of the two-module magnetic crawler robot are described. The robot's geometry, kinematic constraints and the user-commanded velocities are used to calculate the desired instantaneous center of rotation and the corresponding central-linkage angle necessary for the back module to follow the front module when turning. The commands to the rear driving motors are determined by applying PID control on the error between the desired and measured linkage angle position. The controller is designed and tested using Matlab Simulink. It is then implemented and tested on an early two-module magnetic crawler prototype robot. Results of the simulations and experimental validation of the controller design are presented.
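
    A minimal sketch of the low-level loop, assuming placeholder gains and simplified bicycle-style geometry for the desired linkage angle; the published controller was designed and tuned in Matlab Simulink:

      import math

      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral, self.prev_error = 0.0, 0.0

          def update(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      def desired_linkage_angle(v_cmd, omega_cmd, wheelbase=0.4):
          """Linkage angle that places the rear module on the commanded
          instantaneous center of rotation (simplified geometry)."""
          if omega_cmd == 0.0:
              return 0.0
          return math.atan2(wheelbase * omega_cmd, v_cmd)

      pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.02)
      measured = 0.0
      for _ in range(5):
          target = desired_linkage_angle(v_cmd=0.3, omega_cmd=0.5)
          correction = pid.update(target - measured)  # command to rear motors
          measured += 0.5 * correction * pid.dt       # crude plant response
          print(f"linkage angle {measured:.4f} rad (target {target:.4f})")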

  8. Towards a new modality-independent interface for a robotic wheelchair.

    PubMed

    Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo

    2014-05-01

    This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic unsupervised navigation system. It provides flexibility to choose different modalities to command the wheelchair, in addition to being suitable for people with different levels of disability. Users can command the wheelchair based on their eye blinks, eye movements, head movements, by sip-and-puff and through brain signals. The wheelchair can also operate like an auto-guided vehicle, following metallic tapes, or in an autonomous way. The system is provided with an easy-to-use and flexible graphical user interface onboard a personal digital assistant, which is used to allow users to choose commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.

  9. Little AI: Playing a constructivist robot

    NASA Astrophysics Data System (ADS)

    Georgeon, Olivier L.

    Little AI is a pedagogical game aimed at presenting the founding concepts of constructivist learning and developmental Artificial Intelligence. It primarily targets students in computer science and cognitive science, but it can also interest the general public curious about these topics. It requires no particular scientific background; even children can find it entertaining. Professors can use it as a pedagogical resource in class or in online courses. The player presses buttons to control a simulated "baby robot". The player cannot see the robot or its environment, and initially does not know the effects of the commands. The only information received by the player is feedback from the player's commands. The player must learn, at the same time, the functioning of the robot's body and the structure of the environment from patterns in the stream of commands and feedback. We argue that this situation is analogous to how infants engage in early-stage developmental learning (e.g., Piaget (1937), [1]).

  10. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of data bases accessible to the program and displayable to the user.

  11. Knowledge representation system for assembly using robots

    NASA Technical Reports Server (NTRS)

    Jain, A.; Donath, M.

    1987-01-01

    Assembly robots combine the benefits of speed and accuracy with the capability of adaptation to changes in the work environment. However, an impediment to the use of robots is the complexity of the man-machine interface. This interface can be improved by providing a means of using a priori knowledge and reasoning capabilities for controlling and monitoring the tasks performed by robots. Robots ought to be able to perform complex assembly tasks with the help of only supervisory guidance from human operators. For such supervisory guidance, it is important to express the commands in terms of the effects desired, rather than in terms of the motion the robot must undertake in order to achieve these effects. A suitable knowledge representation can facilitate the conversion of task level descriptions into explicit instructions to the robot. Such a system would use symbolic relationships describing the a priori information about the robot, its environment, and the tasks specified by the operator to generate the commands for the robot.

  12. Exact nonlinear command generation and tracking for robot manipulators and spacecraft slewing maneuvers

    NASA Technical Reports Server (NTRS)

    Dywer, T. A. W., III; Lee, G. K. F.

    1984-01-01

    In connection with the current interest in agile spacecraft maneuvers, it has become necessary to consider the nonlinear coupling effects of multiaxial rotation in the treatment of command generation and tracking problems. Multiaxial maneuvers will be required in military missions involving a fast acquisition of moving targets in space. In addition, such maneuvers are also needed for the efficient operation of robot manipulators. Attention is given to details regarding the direct nonlinear command generation and tracking, an approach which has been successfully applied to the design of control systems for V/STOL aircraft, linearizing transformations for spacecraft controlled with external thrusters, the case of flexible spacecraft dynamics, examples from robot dynamics, and problems of implementation and testing.

  13. INL Multi-Robot Control Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot’s condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.

  14. Remotely controlling of mobile robots using gesture captured by the Kinect and recognized by machine learning method

    NASA Astrophysics Data System (ADS)

    Hsu, Roy CHaoming; Jian, Jhih-Wei; Lin, Chih-Chuan; Lai, Chien-Hung; Liu, Cheng-Ting

    2013-01-01

    The main purpose of this paper is to use a machine learning method together with the Kinect and its body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system can achieve, on average, more than 96% accurate identification of 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
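
    A toy stand-in for this pipeline: a one-hidden-layer network trained by backpropagation to classify gesture feature vectors. Real inputs would be Kinect skeleton features; here they are synthetic clusters, and all layer sizes and learning rates are arbitrary:

      import numpy as np

      rng = np.random.default_rng(0)
      n_gestures, n_features, n_hidden = 7, 12, 16

      # Synthetic training set: one noisy cluster of feature vectors per gesture.
      centers = rng.normal(size=(n_gestures, n_features))
      X = np.vstack([c + 0.1 * rng.normal(size=(40, n_features)) for c in centers])
      y = np.repeat(np.arange(n_gestures), 40)
      T = np.eye(n_gestures)[y]                      # one-hot targets

      W1 = rng.normal(scale=0.5, size=(n_features, n_hidden))
      W2 = rng.normal(scale=0.5, size=(n_hidden, n_gestures))

      for epoch in range(500):
          H = np.tanh(X @ W1)                        # hidden layer
          logits = H @ W2
          P = np.exp(logits - logits.max(axis=1, keepdims=True))
          P /= P.sum(axis=1, keepdims=True)          # softmax outputs
          dlogits = (P - T) / len(X)                 # backprop: output error
          dW2 = H.T @ dlogits
          dH = dlogits @ W2.T * (1 - H**2)           # through the tanh layer
          dW1 = X.T @ dH
          W1 -= 1.0 * dW1                            # gradient-descent updates
          W2 -= 1.0 * dW2

      print(f"training accuracy: {(P.argmax(axis=1) == y).mean():.1%}")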

  15. Haptic/graphic rehabilitation: integrating a robot into a virtual environment library and applying it to stroke therapy.

    PubMed

    Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily

    2011-08-08

    Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that specific robot would not be able to be traded for another robot without recoding the program. However, recent efforts in the open-source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
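
    The wrapper-class idea can be sketched as one abstract interface with per-robot subclasses, so that environment code never changes when a robot is swapped. The class and method names below are illustrative, not the actual H3DAPI interface:

      from abc import ABC, abstractmethod

      class HapticRobotWrapper(ABC):
          @abstractmethod
          def set_force(self, fx: float, fy: float, fz: float): ...
          @abstractmethod
          def end_effector_position(self) -> tuple: ...

      class RobotA(HapticRobotWrapper):
          def set_force(self, fx, fy, fz):
              print(f"RobotA command: force=({fx}, {fy}, {fz})")
          def end_effector_position(self):
              return (0.0, 0.0, 0.0)

      class RobotB(HapticRobotWrapper):
          def set_force(self, fx, fy, fz):
              print(f"RobotB command: force=({fx}, {fy}, {fz})")
          def end_effector_position(self):
              return (0.1, 0.0, 0.2)

      def render_haptics(robot: HapticRobotWrapper):
          """Environment code written once against the wrapper interface."""
          x, y, z = robot.end_effector_position()
          robot.set_force(-10.0 * x, -10.0 * y, -10.0 * z)  # simple spring field

      for robot in (RobotA(), RobotB()):  # "switching out" robots: no recoding
          render_haptics(robot)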

  16. Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    NASA Technical Reports Server (NTRS)

    Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel

    2015-01-01

    The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms, in what is typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.
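
    A minimal sketch of behaviors chaining actions, with all command and action names invented for illustration:

      # A supervisory intent expands to a behavior, which is an ordered
      # sequence of primitive actions executed one after another.
      ACTIONS = {
          "locate_valve": lambda: print("perception head: locating valve"),
          "approach":     lambda: print("base: driving to manipulation range"),
          "grasp":        lambda: print("arm: grasping valve handle"),
          "turn":         lambda: print("arm + torso: whole-body turn of valve"),
      }

      BEHAVIORS = {
          # high-level supervisory command -> ordered action sequence
          "close_valve": ["locate_valve", "approach", "grasp", "turn"],
      }

      def run_behavior(intent: str):
          for action in BEHAVIORS[intent]:
              ACTIONS[action]()      # the real system executes these in real time

      run_behavior("close_valve")    # the user supplies only the high-level intent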

  17. Resource allocation and supervisory control architecture for intelligent behavior generation

    NASA Astrophysics Data System (ADS)

    Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason

    2003-09-01

    In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. As the number of available resources on the robot grew, so did the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reactivity. As a continuation of our past efforts, in this paper we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior generating agents) generate behaviors to be executed via these services. The agents are prioritized, and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
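
    A toy version of the Control Supervisor's grant logic, with invented agents, priorities, and services:

      class Service:
          def __init__(self, name):
              self.name, self.owner = name, None

      def arbitrate(requests, services):
          """requests: (priority, agent, [services]); lower number wins."""
          for priority, agent, wanted in sorted(requests):
              if all(services[s].owner is None for s in wanted):
                  for s in wanted:
                      services[s].owner = agent   # serialize access to the resource
                  print(f"granted {wanted} to {agent} (priority {priority})")
              else:
                  print(f"denied {wanted} to {agent}: resource busy")

      services = {n: Service(n) for n in ("drive", "camera", "pan_tilt")}
      arbitrate(
          [(1, "obstacle_avoidance", ["drive"]),
           (2, "waypoint_follow", ["drive"]),       # loses: "drive" already granted
           (3, "survey", ["camera", "pan_tilt"])],
          services,
      )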

  18. Autonomous intelligent assembly systems LDRD 105746 final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2013-04-01

    This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires the integration of a large number of different software technologies, including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although the project was ultimately unsuccessful in achieving a target brick-stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.

  19. Applications of artificial intelligence to space station and automated software techniques: High level robot command language

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1989-01-01

    The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu-driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the work space and to create a near-optimum path. The system will contain the software to generate the required robot instructions.

  20. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- use of overlays for teleoperation to augment the information available on the video feed; (2) Camera views -- type and arrangement of camera views for better task performance and awareness of surroundings; and (3) Command modalities -- development of gesture and voice command vocabularies.

  1. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest that experience at self-locomotion and visual tracking are important factors. Yet the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field of view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environmental state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands, and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered; when given the ability of self-locomotion, the robot responds allocentrically.
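
    The geometric core of mental tracking, predicting the object's egocentric position from the previous state and the motor command, can be sketched as below; this is a hand-coded stand-in for the learned forward model, with an invented motion and frame convention:

      import math

      def predict_egocentric(obj_xy, move_forward, turn):
          """Object position in the robot frame after the robot moves/turns
          (x = right, y = forward; turn is counterclockwise in radians)."""
          x, y = obj_xy[0], obj_xy[1] - move_forward  # robot advances along +y
          c, s = math.cos(-turn), math.sin(-turn)     # robot turn rotates the scene
          return (c * x - s * y, s * x + c * y)

      cup = (0.5, 2.0)   # cup seen 2 m ahead, 0.5 m to the right
      cup = predict_egocentric(cup, move_forward=1.0, turn=math.pi / 2)
      print(f"predicted cup position while out of view: ({cup[0]:.2f}, {cup[1]:.2f})")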

  2. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhaus, Maarten; Alena, Richard L.; Berrios, Daniel; Dowding, John; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail

    2005-01-01

    We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo, including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports the human work practices, allowing people to do what they do well and to work in the ways with which they are most familiar. Field experiments in Utah have enabled the empirical discovery of requirements and the testing of alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.

  3. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.

  4. Easy robot programming for beginners and kids using augmented reality environments

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Nishiguchi, Masahiro

    2010-11-01

    The authors have developed a mobile robot that can be programmed with command and instruction cards. All you have to do is arrange cards on a table and shoot the programming stage with a camera. Our card programming system recognizes instruction cards and translates icon commands into the motor driver program. This card programming environment also provides low-level structured programming.

  5. Robonaut 2 and Watson: Cognitive Dexterity for Future Exploration

    NASA Technical Reports Server (NTRS)

    Badger, Julia M.; Strawser, Philip; Farrell, Logan; Goza, S. Michael; Claunch, Charles A.; Chancey, Raphael; Potapinski, Russell

    2018-01-01

    Future exploration missions will dictate a level of autonomy never before experienced in human spaceflight. Mission plans involving the uncrewed phases of complex human spacecraft in deep space will require a coordinated autonomous capability to be able to maintain the spacecraft when ground control is not available. One promising direction involves embedding intelligence into the system design both through the employment of state-of-the-art system engineering principles as well as through the creation of a cognitive network between a smart spacecraft or habitat and embodiments of cognitive agents. The work described here details efforts to integrate IBM's Watson and other cognitive computing services into NASA Johnson Space Center (JSC)'s Robonaut 2 (R2) anthropomorphic robot. This paper also discusses future directions this work will take. A cognitive spacecraft management system that is able to seamlessly collect data from subsystems, determine corrective actions, and provide commands to enable those actions is the end goal. These commands could be to embedded spacecraft systems or to a set of robotic assets that are tied into the cognitive system. An exciting collaboration with Woodside provides a promising Earth-bound testing analog, as controlling and maintaining normally unmanned offshore platforms has constraints similar to those of the space missions described.

  6. Behavioral networks as a model for intelligent agents

    NASA Technical Reports Server (NTRS)

    Sliwa, Nancy E.

    1990-01-01

    On-going work at NASA Langley Research Center in the development and demonstration of a paradigm called behavioral networks as an architecture for intelligent agents is described. This work focuses on the need to identify a methodology for smoothly integrating the characteristics of low-level robotic behavior, including actuation and sensing, with intelligent activities such as planning, scheduling, and learning. This work assumes that all these needs can be met within a single methodology, and attempts to formalize this methodology in a connectionist architecture called behavioral networks. Behavioral networks are networks of task processes arranged in a task decomposition hierarchy. These processes are connected by both command/feedback data flow, and by the forward and reverse propagation of weights which measure the dynamic utility of actions and beliefs.

  7. A learning controller for nonrepetitive robotic operation

    NASA Technical Reports Server (NTRS)

    Miller, W. T., III

    1987-01-01

    A practical learning control system is described which is applicable to complex robotic and telerobotic systems involving multiple feedback sensors and multiple command variables. In the controller, the learning algorithm is used to learn to reproduce the nonlinear relationship between the sensor outputs and the system command variables over particular regions of the system state space, rather than learning the actuator commands required to perform a specific task. The learned information is used to predict the command signals required to produce desired changes in the sensor outputs. The desired sensor output changes may result from automatic trajectory planning or may be derived from interactive input from a human operator. The learning controller requires no a priori knowledge of the relationships between the sensor outputs and the command variables. The algorithm is well suited for real time implementation, requiring only fixed point addition and logical operations. The results of learning experiments using a General Electric P-5 manipulator interfaced to a VAX-11/730 computer are presented. These experiments involved interactive operator control, via joysticks, of the position and orientation of an object in the field of view of a video camera mounted on the end of the robot arm.
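
    A highly simplified sketch of the table-based idea: learn the local mapping from sensor states to command signals over discretized regions of the state space, with no a priori model. The discretization and update rule here are stand-ins for the published algorithm:

      state_table = {}   # region of state space -> learned command

      def region(sensor_state):
          """Coarse discretization of the sensor state (the 'region')."""
          return tuple(round(s, 1) for s in sensor_state)

      def learn(sensor_state, command, gain=0.5):
          key = region(sensor_state)
          old = state_table.get(key, 0.0)
          state_table[key] = old + gain * (command - old)   # incremental update

      def predict(sensor_state):
          return state_table.get(region(sensor_state), 0.0)  # default until trained

      learn((0.52, 0.11), command=3.0)  # observed: this command produced this state
      print(predict((0.49, 0.08)))      # a nearby state reuses the learned command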

  8. Compliance control with embedded neural elements

    NASA Technical Reports Server (NTRS)

    Venkataraman, S. T.; Gulati, S.

    1992-01-01

    The authors discuss a control approach that embeds the neural elements within a model-based compliant control architecture for robotic tasks that involve contact with unstructured environments. Compliance control experiments have been performed on actual robotics hardware to demonstrate the performance of contact control schemes with neural elements. System parameters were identified under the assumption that environment dynamics have a fixed nonlinear structure. A robotics research arm, placed in contact with a single degree-of-freedom electromechanical environment dynamics emulator, was commanded to move through a desired trajectory. The command was implemented by using a compliant control strategy.

  9. Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations

    NASA Technical Reports Server (NTRS)

    Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana

    2011-01-01

    We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.

  10. Improved CLARAty Functional-Layer/Decision-Layer Interface

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nessnas, Issa; Shu, I-Hsiang

    2008-01-01

    Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, the CLARAty contains two layers: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, and supports complex data structures and commanding at multiple levels of the FL, and relative to the prior software, offers a much wider spectrum of state-update capabilities.

  11. Explanation Capabilities for Behavior-Based Robot Control

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.

    2012-01-01

    A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of remotely located autonomous vehicles did not coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center, based on a detail-level component analysis of requests, could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.

  12. Tier-scalable reconnaissance: the future in autonomous C4ISR systems has arrived: progress towards an outdoor testbed

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; Brooks, Alexander J.-W.; Tarbell, Mark A.; Dohm, James M.

    2017-05-01

    Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., the theatre, disaster-stricken areas, etc.) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) Automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; (5) and subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous tele-commanding, and autonomous decision making based on a Caltech-patented framework comprising sensor-data-fusion (feature-vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).

  13. Research into command, control, and communications in space construction

    NASA Technical Reports Server (NTRS)

    Davis, Randal

    1990-01-01

    Coordinating and controlling large numbers of autonomous or semi-autonomous robot elements in a space construction activity will present problems that are very different from most command and control problems encountered in the space business. As part of our research into the feasibility of robot constructors in space, the CSC Operations Group is examining a variety of command, control, and communications (C3) issues. Two major questions being asked are: can we apply C3 techniques and technologies already developed for use in space; and are there suitable terrestrial solutions for extraterrestrial C3 problems? An overview of the control architectures, command strategies, and communications technologies that we are examining is provided and plans for simulations and demonstrations of our concepts are described.

  14. A Practical Comparison of Motion Planning Techniques for Robotic Legs in Environments with Obstacles

    NASA Technical Reports Server (NTRS)

    Smith, Tristan B.; Chavez-Clemente, Daniel

    2009-01-01

    ATHLETE is a large six-legged tele-operated robot. Each foot is a wheel; travel can be achieved by walking, rolling, or some combination of the two. Operators control ATHLETE by selecting parameterized commands from a command dictionary. While rolling can be done efficiently, any motion involving steps is cumbersome - each step can require multiple commands and take many minutes to complete. In this paper, we consider four different algorithms that generate a sequence of commands to take a step. We consider a baseline heuristic, a randomized motion planning algorithm, and two variants of A* search. Results for a variety of terrains are presented, and we discuss the quantitative and qualitative tradeoffs between the approaches.
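
    For reference, the A* search style compared in the paper can be shown on a toy 2-D grid; the real planners search foot poses and command parameters rather than grid cells, and the obstacle layout here is invented:

      import heapq

      def astar(start, goal, blocked, size=8):
          openq = [(0, start)]
          g = {start: 0}
          parent = {}
          while openq:
              _, node = heapq.heappop(openq)
              if node == goal:
                  path = [node]
                  while node in parent:
                      node = parent[node]
                      path.append(node)
                  return path[::-1]
              x, y = node
              for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                  if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in blocked:
                      ng = g[node] + 1
                      if ng < g.get((nx, ny), float("inf")):
                          g[(nx, ny)] = ng
                          parent[(nx, ny)] = node
                          h = abs(nx - goal[0]) + abs(ny - goal[1])  # admissible
                          heapq.heappush(openq, (ng + h, (nx, ny)))
          return None   # no step sequence exists

      print(astar((0, 0), (5, 5), blocked={(2, y) for y in range(6)}))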

  15. Multi-Touch Interaction for Robot Command and Control

    DTIC Science & Technology

    2010-12-01

    [The indexed record contains only fragments of the report: the table-of-contents entries "7.3.2 Multi-hand and Multi-finger Gesturing" and "7.3.3 Handwriting", plus text excerpts noting that during an emergency response (real or training exercise) support personnel cannot stop the command staff for an hour-long demonstration of the gesture interface, and that the real-world movement of the robot still involves the "problems" of inertia, friction, and other physics, though not from the user's perspective.]

  16. Command Recognition of Robot with Low Dimension Whole-Body Haptic Sensor

    NASA Astrophysics Data System (ADS)

    Ito, Tatsuya; Tsuji, Toshiaki

    The authors have developed “haptic armor”, a whole-body haptic sensor that has an ability to estimate contact position. Although it is developed for safety assurance of robots in human environment, it can also be used as an interface. This paper proposes a command recognition method based on finger trace information. This paper also discusses some technical issues for improving recognition accuracy of this system.

  17. ROMPS critical design review. Volume 2: Robot module design documentation

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1992-01-01

    The robot module design documentation for the Remote Operated Materials Processing in Space (ROMPS) experiment is compiled. This volume presents the following information: robot module modifications; Easylab commands definitions and flowcharts; Easylab program definitions and flowcharts; robot module fault conditions and structure charts; and C-DOC flow structure and cross references.

  18. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  19. Intelligent Autonomy for Unmanned Surface and Underwater Vehicles

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Woodward, Gail

    2011-01-01

    As the Autonomous Underwater Vehicle (AUV) and Autonomous Surface Vehicle (ASV) platforms mature in endurance and reliability, a natural evolution will occur towards longer, more remote autonomous missions. This evolution will require the development of key capabilities that allow these robotic systems to perform a high level of on-board decision-making, which would otherwise be performed by human operators. With more decision-making capabilities, less a priori knowledge of the area of operations would be required, as these systems would be able to sense and adapt to changing environmental conditions, such as unknown topography, currents, obstructions, bays, harbors, islands, and river channels. Existing vehicle sensors would be dual-use; that is, they would be utilized for the primary mission, which may be mapping or hydrographic reconnaissance, as well as for autonomous hazard avoidance, route planning, and bathymetric-based navigation. This paper describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing), developed at JPL (Jet Propulsion Laboratory), that was designed to address many of the issues for survivable ASV/AUV control and to provide adaptive mission capabilities. The results of some on-water tests with US Navy technology test platforms are also presented.

  20. Robotics research projects report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsia, T.C.

    The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
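
    Commanding such an arm by serial ASCII strings might look like the sketch below, which assumes the pyserial package; the port name and the command strings themselves are hypothetical, not RHINO's actual protocol:

      import serial  # pip install pyserial

      with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
          for command in ("A+100\r", "B-050\r"):   # e.g., move axis A, then axis B
              port.write(command.encode("ascii"))  # serial transmission of ASCII
              reply = port.readline()              # controller acknowledgement
              print(command.strip(), "->", reply)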

  1. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    NASA Technical Reports Server (NTRS)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  2. Forming Human-Robot Teams Across Time and Space

    NASA Technical Reports Server (NTRS)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, the time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth, and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample, or construct in advance of human teammates, transition to assistant roles when the crew is present, and then become caretakers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes, and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor the progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system is also running when humans are in direct contact with the robot, so it involves both internal fault detection and force sensing for unintended external contacts. The designs for the supervisory command mode and the redundant safety system will be described. Specific implementations were developed, and test results will be reported. Experiments were conducted using terrestrial analogs for deep space missions, where time delays were artificially added to emulate the longer distances found in space.
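
    As a rough sketch of this supervisory idea, assuming a fixed round-trip time and invented names, a command can update the local simulation immediately while telemetry only confirms it one round trip later:

      import collections

      # A toy illustration (not the flight implementation): the supervisor's
      # simulation reacts to each command instantly, while telemetry from the
      # real robot only confirms it one round trip later; in the meantime the
      # robot itself is responsible for safety.
      ROUND_TRIP_S = 10

      class DelayedLink:
          def __init__(self, rtt):
              self.rtt, self.pending = rtt, collections.deque()

          def send(self, t, cmd):
              self.pending.append((t + self.rtt, cmd))  # ack arrives after rtt

          def poll(self, t):
              acked = []
              while self.pending and self.pending[0][0] <= t:
                  acked.append(self.pending.popleft()[1])
              return acked

      link, sim_pose = DelayedLink(ROUND_TRIP_S), 0.0
      for t in range(0, 25, 5):
          link.send(t, f"drive_segment@{t}s")
          sim_pose += 1.0                    # simulation updates immediately
          for cmd in link.poll(t):
              print(f"t={t}s: telemetry confirms {cmd}")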

  3. Cooperative Three-Robot System for Traversing Steep Slopes

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terrance; Aghazarian, Hrand; Younse, Paulo; Garrett, Michael

    2009-01-01

    Teamed Robots for Exploration and Science in Steep Areas (TRESSA) is a system of three autonomous mobile robots that cooperate with each other to enable scientific exploration of steep terrain (slope angles up to 90°). Originally intended for use in exploring steep slopes on Mars that are not accessible to lone wheeled robots (Mars Exploration Rovers), TRESSA and systems like TRESSA could also be used on Earth for performing rescues on steep slopes and for exploring steep slopes that are too remote or too dangerous to be explored by humans. TRESSA is modeled on safe human climbing of steep slopes, two key features of which are teamwork and safety tethers. Two of the autonomous robots, denoted Anchorbots, remain at the top of a slope; the third robot, denoted the Cliffbot, traverses the slope. The Cliffbot drives over the cliff edge supported by tethers, which are paid out from the Anchorbots. The Anchorbots autonomously control the tension in the tethers to counter the gravitational force on the Cliffbot. The tethers are paid out and reeled in as needed, keeping the body of the Cliffbot oriented approximately parallel to the local terrain surface and preventing wheel slip by controlling the speed of descent or ascent, thereby enabling the Cliffbot to drive freely up, down, or across the slope. Due to the interactive nature of the three-robot system, the robots must be very tightly coupled. To provide for this tight coupling, the TRESSA software architecture is built on a combination of (1) the multi-robot layered behavior-coordination architecture reported in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 10 (October 2004), page 65, and (2) the real-time control architecture reported in "Robot Electronics Architecture" (NPO-41784), NASA Tech Briefs, Vol. 32, No. 1 (January 2008), page 28. The combination architecture makes it possible to keep the three robots synchronized and coordinated, to use data from all three robots for decision-making at each step, and to control the physical connections among the robots. In addition, TRESSA, as in prior systems that have utilized this architecture, incorporates a capability for deterministic response to unanticipated situations from yet another architecture, reported in "Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40. Tether tension control is a major consideration in the design and operation of TRESSA. Tension is measured by force sensors connected to each tether at the Cliffbot. The direction of the tension (both azimuth and elevation) is also measured. The tension controller combines a controller to counter gravitational force and an optional velocity controller that anticipates the motion of the Cliffbot. The gravity controller estimates the slope angle from the inclination of the tethers. This angle and the weight of the Cliffbot determine the total tension needed to counteract the weight of the Cliffbot. The total needed tension is broken into components for each Anchorbot. The difference between this needed tension and the tension measured at the Cliffbot constitutes an error signal that is provided to the gravity controller. The velocity controller computes the tether speed needed to produce the desired motion of the Cliffbot. Another major consideration in the design and operation of TRESSA is detection of faults.
    Each robot in the TRESSA system monitors its own performance and the performance of its teammates in order to detect any system faults and prevent unsafe conditions. At startup, communication links are tested, and if any robot is not communicating, the system refuses to execute any motion commands. Prior to motion, the Anchorbots attempt to set tensions in the tethers at optimal levels for counteracting the weight of the Cliffbot; if either Anchorbot fails to reach its optimal tension level within a specified time, it sends a message to the other robots and the commanded motion is not executed. If any mechanical error (e.g., stalling of a motor) is detected, the affected robot sends a message triggering stoppage of the current motion. Lastly, messages are passed among the robots at each time step (10 Hz) to share sensor information during operations. If messages from any robot cease for more than an allowable time interval, the other robots detect the communication loss and initiate stoppage.
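
    The gravity portion of the tension controller described above can be sketched as a small force balance. The mass, slope, azimuth convention (tether azimuths measured from the uphill direction), and function names below are illustrative assumptions, not the flight code:

      import math

      def anchor_tensions(mass_kg, slope_deg, az_left_deg, az_right_deg):
          """Total tether tension needed to counteract the Cliffbot's weight
          component along the slope, resolved into one component per
          Anchorbot from the tether azimuths (a planar force balance)."""
          t_total = mass_kg * 9.81 * math.sin(math.radians(slope_deg))
          a_l, a_r = math.radians(az_left_deg), math.radians(az_right_deg)
          det = math.sin(a_l - a_r)
          t_left = -t_total * math.sin(a_r) / det
          t_right = t_total * math.sin(a_l) / det
          return t_left, t_right

      # A 60 kg Cliffbot on an 80 degree slope, tethers splayed +/-30 degrees:
      print(anchor_tensions(60.0, 80.0, -30.0, 30.0))  # ~(334.7, 334.7) N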

  4. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    ERIC Educational Resources Information Center

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks requiring the synchronized, collaborative services of multiple robot agents, and an auditing mechanism, which ensures proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…

  5. Torque Control of Underactuated Tendon-driven Robotic Fingers

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Wampler, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor); Reiland, Matthew J. (Inventor); Diftler, Myron A. (Inventor); Bridgwater, Lyndon (Inventor); Platt, Robert (Inventor)

    2013-01-01

    A robotic system includes a robot having a total number of degrees of freedom (DOF) equal to at least n, and an underactuated tendon-driven finger driven by n tendons and having n DOF, the finger having at least two joints and being characterized by an asymmetrical joint radius in one embodiment. A controller is in communication with the robot, and controls actuation of the tendon-driven finger using force control. Operating the finger with force control on the tendons, rather than position control, eliminates the unconstrained slack-space that would have otherwise existed. The controller may utilize the asymmetrical joint radii to independently command joint torques. A method of controlling the finger includes commanding either independent or parameterized joint torques to the controller to actuate the fingers via force control on the tendons.
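
    Joint torque is linear in the tendon tensions through a moment-arm matrix, and asymmetrical joint radii make that matrix invertible, so desired torques map to unique tensions. A minimal sketch with invented radii (the record gives no numbers):

      import numpy as np

      # Illustrative moment-arm matrix R (metres): row i gives the torque each
      # tendon tension produces about joint i. Asymmetrical joint radii make R
      # invertible, which is what allows independent joint-torque commands.
      R = np.array([[0.010, -0.008],
                    [0.006, -0.010]])

      def tendon_tensions(tau):
          """Tensions f solving tau = R @ f. Tendons can only pull, so a
          feasible command must yield f >= 0; a real controller also keeps a
          minimum tension so no tendon goes slack."""
          f = np.linalg.solve(R, tau)
          if np.any(f < 0):
              raise ValueError("torque not achievable with pull-only tendons")
          return f

      print(tendon_tensions(np.array([0.05, 0.02])))  # [N]: ~[6.54, 1.92]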

  6. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    PubMed

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
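
    The state-machine coupling can be pictured as follows; the imagery-task labels and robot methods are invented for illustration, since the paper's exact task set is not reproduced in this record:

      # Hypothetical sketch: decoded imagery classes alternately select a pawn
      # and a cup, and the state machine issues the corresponding arm actions.
      class PawnCupStateMachine:
          def __init__(self, robot):
              self.robot, self.state = robot, "choose_pawn"

          def on_decoded(self, imagery_class):
              if self.state == "choose_pawn":
                  colour = {"motor": "red", "navigation": "green",
                            "arithmetic": "blue", "music": "yellow"}[imagery_class]
                  self.robot.pick_pawn(colour)   # vision locates the pawn
                  self.state = "choose_cup"
              else:
                  cup = {"motor": 0, "navigation": 1,
                         "arithmetic": 2, "music": 3}[imagery_class]
                  self.robot.place_in_cup(cup)
                  self.state = "choose_pawn"

      class ArmStub:
          def pick_pawn(self, colour): print("picking", colour, "pawn")
          def place_in_cup(self, cup): print("placing into cup", cup)

      fsm = PawnCupStateMachine(ArmStub())
      fsm.on_decoded("motor")
      fsm.on_decoded("music")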

  7. The Canonical Robot Command Language (CRCL).

    PubMed

    Proctor, Frederick M; Balakirsky, Stephen B; Kootbally, Zeid; Kramer, Thomas R; Schlenoff, Craig I; Shackleford, William P

    2016-01-01

    Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information.
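
    For flavor, a CRCL command instance is an XML message; the sketch below builds an approximate move command, with element names that follow the published schema only loosely and should not be taken as verbatim CRCL:

      import xml.etree.ElementTree as ET

      # A CRCL-style move command; element names are approximations chosen
      # for illustration, not guaranteed to match the schema verbatim.
      cmd = ET.Element("CRCLCommandInstance")
      move = ET.SubElement(cmd, "CRCLCommand", {"type": "MoveToType"})
      ET.SubElement(move, "CommandID").text = "1"
      ET.SubElement(move, "MoveStraight").text = "true"
      point = ET.SubElement(ET.SubElement(move, "EndPosition"), "Point")
      for axis, value in zip("XYZ", ("0.650", "0.100", "0.200")):
          ET.SubElement(point, axis).text = value
      print(ET.tostring(cmd, encoding="unicode"))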

  8. The Canonical Robot Command Language (CRCL)

    PubMed Central

    Proctor, Frederick M.; Balakirsky, Stephen B.; Kootbally, Zeid; Kramer, Thomas R.; Schlenoff, Craig I.; Shackleford, William P.

    2017-01-01

    Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information. PMID:28529393

  9. 32 CFR 700.811 - Dealers, tradesmen, and agents.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... REGULATIONS AND OFFICIAL RECORDS UNITED STATES NAVY REGULATIONS AND OFFICIAL RECORDS The Commanding Officer Commanding Officers in General § 700.811 Dealers, tradesmen, and agents. (a) In general, dealers or tradesmen or their agents shall not be admitted within a command, except as authorized by the commanding...

  10. 32 CFR 700.811 - Dealers, tradesmen, and agents.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... REGULATIONS AND OFFICIAL RECORDS UNITED STATES NAVY REGULATIONS AND OFFICIAL RECORDS The Commanding Officer Commanding Officers in General § 700.811 Dealers, tradesmen, and agents. (a) In general, dealers or tradesmen or their agents shall not be admitted within a command, except as authorized by the commanding...

  11. 32 CFR 700.811 - Dealers, tradesmen, and agents.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... REGULATIONS AND OFFICIAL RECORDS UNITED STATES NAVY REGULATIONS AND OFFICIAL RECORDS The Commanding Officer Commanding Officers in General § 700.811 Dealers, tradesmen, and agents. (a) In general, dealers or tradesmen or their agents shall not be admitted within a command, except as authorized by the commanding...

  12. 32 CFR 700.811 - Dealers, tradesmen, and agents.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... REGULATIONS AND OFFICIAL RECORDS UNITED STATES NAVY REGULATIONS AND OFFICIAL RECORDS The Commanding Officer Commanding Officers in General § 700.811 Dealers, tradesmen, and agents. (a) In general, dealers or tradesmen or their agents shall not be admitted within a command, except as authorized by the commanding...

  13. 32 CFR 700.811 - Dealers, tradesmen, and agents.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... REGULATIONS AND OFFICIAL RECORDS UNITED STATES NAVY REGULATIONS AND OFFICIAL RECORDS The Commanding Officer Commanding Officers in General § 700.811 Dealers, tradesmen, and agents. (a) In general, dealers or tradesmen or their agents shall not be admitted within a command, except as authorized by the commanding...

  14. Designing speech-based interfaces for telepresence robots for people with disabilities.

    PubMed

    Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David

    2013-06-01

    People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.

  15. Bilateral Impedance Control For Telemanipulators

    NASA Technical Reports Server (NTRS)

    Moore, Christopher L.

    1993-01-01

    A telemanipulator system includes a master robot manipulated by a human operator and a slave robot performing tasks at a remote location. The two robots are electronically coupled so that the slave robot moves in response to commands from the master robot. Teleoperation is greatly enhanced if forces acting on the slave robot are fed back to the operator, giving the operator the feeling of manipulating the remote environment directly. The main advantage of bilateral impedance control is that it enables arbitrary specification of desired performance characteristics for the telemanipulator system. The relationship between force and position is modulated at both ends of the system to suit the requirements of the task.
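
    The core idea can be written as a virtual spring-damper whose stiffness and damping are the designer's choice; the one-axis gains and values below are purely illustrative:

      def impedance_force(k, b, x_des, x, v_des, v):
          """Virtual spring-damper: F = K(x_d - x) + B(v_d - v)."""
          return k * (x_des - x) + b * (v_des - v)

      # Illustrative one-axis values (metres, m/s):
      x_master, v_master = 0.10, 0.0
      x_slave, v_slave = 0.08, 0.0
      # Force fed back to the operator, proportional to slave tracking error;
      # the operator feels a pull toward the slave's actual position.
      print(impedance_force(300.0, 5.0, x_slave, x_master, v_slave, v_master))
      # -> -6.0 N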

  16. Off-line programming motion and process commands for robotic welding of Space Shuttle main engines

    NASA Technical Reports Server (NTRS)

    Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.

    1987-01-01

    The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.

  17. A Survey of Robotic Technology.

    DTIC Science & Technology

    1983-07-01

    developed the following definition of a robot: A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized...subroutines commands to specific actuators, computations based on sensor data, etc. For instance, the job might be to assemble an automobile ...the set-up developed at Draper Labs to enable a robot to assemble an automobile alternator. The assembly operation is impressive to watch. The number

  18. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of a humanoid robotic platform, and specifically of the open-source iCub platform, for the study of embodied cognition.

  19. High level intelligent control of telerobotics systems

    NASA Technical Reports Server (NTRS)

    Mckee, James

    1988-01-01

    A high-level robot command language is proposed for the autonomous mode of an advanced telerobotics system, along with a predictive display mechanism for the teleoperational mode. It is believed that any such system will involve some mixture of these two modes, since, although artificial intelligence can facilitate significant autonomy, a system that can resort to teleoperation will always have the advantage. The high-level command language will allow humans to give the robot instructions in a very natural manner. The robot will then analyze these instructions to infer meaning so that it can translate the task into lower-level executable primitives. If, however, the robot is unable to perform the task autonomously, it will switch to the teleoperational mode. The time delay between control movement and actual robot movement has always been a problem in teleoperations. The remote operator may not actually see (via a monitor) the results of these actions for several seconds. A computer-generated predictive display system is proposed whereby the operator can see a real-time model of the robot's environment and the delayed video picture on the monitor at the same time.

  20. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) signals from the elbow, wrist, and hand have been widely used as inputs to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activities in the upper limbs are not strong enough to be used as control signals. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control by both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy, which successfully operates robots with complicated control commands from a limited set of muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
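
    An adaptive-threshold onset detector of the general kind described can be sketched in a few lines; the window length, robust baseline, and factor k are illustrative choices, not the authors' parameters:

      import numpy as np

      def detect_onsets(emg, fs, win_s=0.05, k=3.0):
          """Mark samples where smoothed EMG power rises k robust deviations
          above a running baseline; an FSM can then turn onset patterns from
          different muscles into robot commands."""
          win = max(1, int(win_s * fs))
          power = np.convolve(emg ** 2, np.ones(win) / win, mode="same")
          baseline = np.median(power)
          spread = np.median(np.abs(power - baseline)) + 1e-12
          return power > baseline + k * spread

      rng = np.random.default_rng(0)
      emg = rng.normal(0, 0.05, 2000)
      emg[800:900] += rng.normal(0, 0.5, 100)   # a burst of muscle activity
      print(detect_onsets(emg, fs=1000).sum())  # roughly the burst length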

  1. Multi-robot control interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruemmer, David J; Walton, Miles C

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots, with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots, with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  2. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability, and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  3. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability, and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  4. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation, and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  5. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  6. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and result in high losses. Firefighters often cannot access the source of a fire because of building damage and very high temperatures, or even because of the presence of explosive materials. With such constraints and the high risk involved in handling a fire, a technological breakthrough that can help fight fires is necessary. Our paper proposes the use of robots to extinguish fires, controlled from a specified distance in order to reduce the risk. A fire extinguisher robot was assembled with the intention of extinguishing fires using a water pump as an actuator. The robot's movement was controlled using an Android smartphone via a Wi-Fi network, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robotic movement. We used an ATMega8 as the main microcontroller in the robot. The robot was equipped with a camera and ultrasonic sensors. The camera played the role of giving feedback to the user and of finding the source of fire. The ultrasonic sensors were used to avoid collisions during movement. Feedback provided by the camera on the robot was displayed on the screen of the smartphone. In lab testing, the robot could move following user commands such as turn right, turn left, forward, and backward. The ultrasonic sensors worked well: the robot could be stopped at a distance of less than 15 cm. In the fire test, the robot performed the task of extinguishing the fire properly.
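
    Schematically, the on-robot command handler pairs each movement command with the ultrasonic guard. The command names and motor interface below are invented for illustration, and the real firmware runs on the ATMega8 rather than in Python:

      SAFE_DISTANCE_CM = 15

      def handle_command(cmd, range_cm, motors):
          """Dispatch a smartphone command, refusing forward motion when the
          ultrasonic range falls under the safety distance."""
          if cmd == "FORWARD" and range_cm < SAFE_DISTANCE_CM:
              motors.stop()                # obstacle closer than 15 cm
          elif cmd == "FORWARD":
              motors.drive(+1, +1)
          elif cmd == "BACKWARD":
              motors.drive(-1, -1)
          elif cmd == "LEFT":
              motors.drive(-1, +1)
          elif cmd == "RIGHT":
              motors.drive(+1, -1)
          elif cmd == "PUMP_ON":
              motors.pump(True)            # start the water pump

      class MotorStub:
          def drive(self, left, right): print("drive", left, right)
          def stop(self): print("stop")
          def pump(self, on): print("pump", on)

      handle_command("FORWARD", range_cm=10, motors=MotorStub())  # -> stop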

  7. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of a humanoid robotic platform, and specifically of the open-source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  8. Command and Telemetry Latency Effects on Operator Performance during International Space Station Robotics Operations

    NASA Technical Reports Server (NTRS)

    Currie, Nancy J.; Rochlis, Jennifer

    2004-01-01

    International Space Station (ISS) operations will require the on-board crew to perform numerous robotic-assisted assembly, maintenance, and inspection activities. Current estimates for some robotically performed maintenance tasks yield timelines that are disproportionately long and potentially exceed crew availability and duty times. Ground-based control of the ISS robotic manipulators, specifically the Special Purpose Dexterous Manipulator (SPDM), is being examined as one potential solution to alleviate the excessive amounts of crew time required for extravehicular robotic maintenance and inspection tasks.

  9. The Evolution of Three Dimensional Visualization for Commanding the Mars Rovers

    NASA Technical Reports Server (NTRS)

    Hartman, Frank R.; Wright, John; Cooper, Brian

    2014-01-01

    NASA's Jet Propulsion Laboratory has built and operated four rovers on the surface of Mars. Two and three dimensional visualization has been extensively employed to command both the mobility and robotic arm operations of these rovers. Stereo visualization has been an important component in this set of visualization techniques. This paper discusses the progression of the implementation and use of visualization techniques for in-situ operations of these robotic missions. Illustrative examples will be drawn from the results of using these techniques over more than ten years of surface operations on Mars.

  10. NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot-operated materials processing in space (RoMPS) are presented in viewgraph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab command and variable summary for rack stations and the annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab system memory map; (10) servo axis control logic suppliers; and (11) annealing oven control subsystem.

  11. Proposed Methodology for Application of Human-like gradual Multi-Agent Q-Learning (HuMAQ) for Multi-robot Exploration

    NASA Astrophysics Data System (ADS)

    Narayan Ray, Dip; Majumder, Somajyoti

    2014-07-01

    Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but devising algorithms for unstructured and unknown environments has always been an important issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent, single-robot system. HuMAQ uses the concept of Subsumption architecture, a well-known behaviour-based architecture, for prioritizing the agents of the multi-agent system, and executes only the most common action out of all the different actions recommended by different agents. Instead of using a new state-action table (Q-table) each time, HuMAQ uses the immediate past table for efficient and faster exploration. Proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations and applications. The same architecture has been modified for multi-robot exploration of an environment. Apart from all the agents used in the single-robot system, agents for inter-robot communication and coordination/cooperation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed, identical autonomous robotic systems communicating with each other through the ZigBee protocol.
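
    A minimal tabular Q-learning sketch of the table-reuse idea, with illustrative states, actions, and parameters (the paper's actual agent set and priorities are not reproduced here):

      ACTIONS = ["forward", "left", "right", "stop"]

      def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
          """One Q-learning step:
          Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
          best_next = max(Q.setdefault(s_next, {x: 0.0 for x in ACTIONS}).values())
          Q.setdefault(s, {x: 0.0 for x in ACTIONS})
          Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

      # HuMAQ-style reuse: start from the previous run's table rather than
      # zeros; here an empty dict stands in for the loaded past table.
      Q = {}
      q_update(Q, s="corridor", a="forward", r=1.0, s_next="junction")
      print(Q["corridor"]["forward"])  # 0.1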

  12. Kinematic equations for resolved-rate control of an industrial robot arm

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1983-01-01

    An operator can use kinematic, resolved-rate equations to dynamically control a robot arm by watching its response to commanded inputs. Resolved-rate equations for the control of a particular six-degree-of-freedom industrial robot arm are derived, and the equations are then simplified for faster computation. Methods for controlling the robot arm in regions which normally cause mathematical singularities in the resolved-rate equations are discussed.
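
    In outline, resolved-rate control maps commanded hand rates to joint rates through the (pseudo)inverse Jacobian; the placeholder Jacobian and rate limit below are illustrative, and a real controller evaluates the Jacobian from the arm's kinematics at each step:

      import numpy as np

      def resolved_rate(J, xdot_cmd, qdot_limit=1.0):
          """Joint rates from commanded hand rates via the Jacobian
          pseudoinverse, scaled uniformly if any joint would exceed its
          operational rate limit."""
          qdot = np.linalg.pinv(J) @ xdot_cmd
          scale = max(1.0, float(np.max(np.abs(qdot))) / qdot_limit)
          return qdot / scale

      J = np.array([[1.0, 0.2], [0.0, 0.5]])   # placeholder 2-DOF Jacobian
      print(resolved_rate(J, np.array([0.5, 1.0])))  # -> [0.05, 1.0]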

  13. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    PubMed

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  14. Autonomous Shepherding Behaviors of Multiple Target Steering Robots.

    PubMed

    Lee, Wonki; Kim, DaeEun

    2017-11-25

    This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among the many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity of each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to their position relative to the flock. We define several independent behavior rules; each agent calculates the extent to which it will move by summing the contributions of each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. Simulation results with multiple robots show that each robot performs the actions needed for shepherding, and that only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and the steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach.
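
    A toy version of the rule-summing scheme: each behavior proposes a displacement vector, and the agent moves by their weighted sum. The rules and weights below are illustrative, not the paper's exact set:

      import numpy as np

      def flock_step(pos, neighbours, shepherds, w_coh=0.3, w_sep=0.5, w_flee=1.0):
          """One flocking-agent update from three independent rules."""
          coh = np.mean(neighbours, axis=0) - pos                # cohesion
          sep = np.sum([pos - n for n in neighbours], axis=0)    # separation
          flee = np.sum([pos - s for s in shepherds], axis=0)    # avoid shepherds
          step = w_coh * coh + w_sep * sep + w_flee * flee
          return pos + 0.1 * step          # small gain keeps motion smooth

      pos = np.array([0.0, 0.0])
      neighbours = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
      shepherds = [np.array([-1.0, -1.0])]
      print(flock_step(pos, neighbours, shepherds))  # -> [0.065, 0.065]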

  15. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review.

    PubMed

    Rasheed, Nadia; Amin, Shamsudin H M

    2016-01-01

    Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue.

  16. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review

    PubMed Central

    Rasheed, Nadia; Amin, Shamsudin H. M.

    2016-01-01

    Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue. PMID:27069470

  17. Human factors optimization of virtual environment attributes for a space telerobotic control station

    NASA Astrophysics Data System (ADS)

    Lane, Jason Corde

    2000-10-01

    Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display, which is a graphical simulation that shows the commands sent to the actual system in real-time. The primary goal of this research was to show a commanded display as an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments were used to investigate how subjects compensated for time delay under a variety of conditions while controlling a 7-degree of freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship occurred even at different manipulator speeds, varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time delay effects, up to 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts to the task worksite, even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display was still a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted. A correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head mounted display and LCD shutter glasses, were compared. The results of the above testing were used to develop an effective virtual environment control station to control a multi-arm robot.

  18. Multi-agent autonomous system

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system.
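
    A toy version of the tracker's planning loop: plan a grid path around known obstacles, then emit simple movement commands for a surface craft. The grid, command names, and breadth-first planner are illustrative stand-ins for whatever the patented system actually uses:

      from collections import deque

      def plan(grid, start, goal):
          """Breadth-first search on an occupancy grid; returns a cell path."""
          q, seen = deque([[start]]), {start}
          while q:
              path = q.popleft()
              if path[-1] == goal:
                  return path
              r, c = path[-1]
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                          and grid[nr][nc] == 0 and (nr, nc) not in seen):
                      seen.add((nr, nc))
                      q.append(path + [(nr, nc)])

      def to_commands(path):
          moves = {(1, 0): "MOVE_SOUTH", (-1, 0): "MOVE_NORTH",
                   (0, 1): "MOVE_EAST", (0, -1): "MOVE_WEST"}
          return [moves[(b[0] - a[0], b[1] - a[1])] for a, b in zip(path, path[1:])]

      grid = [[0, 0, 0],
              [1, 1, 0],    # 1 = obstacle identified from the air
              [0, 0, 0]]
      print(to_commands(plan(grid, (0, 0), (2, 0))))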

  19. Architecture for Control of the K9 Rover

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Bualat, Maria; Fair, Michael; Wright, Anne; Washington, Richard

    2006-01-01

    Software featuring a multilevel architecture is used to control the hardware on the K9 Rover, which is a mobile robot used in research on robots for scientific exploration and autonomous operation in general. The software consists of five types of modules: Device Drivers - These modules, at the lowest level of the architecture, directly control motors, cameras, data buses, and other hardware devices. Resource Managers - Each of these modules controls several device drivers. Resource managers can be commanded by either a remote operator or the pilot or conditional-executive modules described below. Behaviors and Data Processors - These modules perform computations for such functions as planning paths, avoiding obstacles, visual tracking, and stereoscopy. These modules can be commanded only by the pilot. Pilot - The pilot receives a possibly complex command from the remote operator or the conditional executive, then decomposes the command into (1) more-specific commands to the resource managers and (2) requests for information from the behaviors and data processors. Conditional Executive - This highest-level module interprets a command plan sent by the remote operator, determines whether resources required for execution of the plan are available, monitors execution, and, if necessary, selects an alternate branch of the plan.

  20. Maintaining Limited-Range Connectivity Among Second-Order Agents

    DTIC Science & Technology

    2016-07-07

    we consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) do...hoc networks of mobile autonomous agents. This loose terminology refers to groups of robotic agents with limited mobility and communication...connectivity can be preserved. 3.1. Networks of robotic agents with second-order dynamics and the connectivity maintenance problem. We begin by

  1. RACE pulls for shared control

    NASA Astrophysics Data System (ADS)

    Leahy, M. B., Jr.; Cassiday, B. K.

    1993-02-01

    Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise that assists the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying workloads conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions, from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.

  2. RACE pulls for shared control

    NASA Astrophysics Data System (ADS)

    Leahy, Michael B., Jr.; Cassiday, Brian K.

    1992-11-01

    Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise that assists the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying workloads conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions, from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.

  3. RACE pulls for shared control

    NASA Technical Reports Server (NTRS)

    Leahy, M. B., Jr.; Cassiday, B. K.

    1993-01-01

    Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise that assists the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying workloads conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions, from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.

  4. R4SA for Controlling Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand

    2009-01-01

    The R4SA GUI mentioned in the immediately preceding article is a user-friendly interface for controlling one or more robots. This GUI makes it possible to perform meaningful real-time field experiments and research in robotics at an unmatched level of fidelity, within minutes of setup. It provides such powerful graphing modes as that of a digitizing oscilloscope that displays up to 250 variables at rates between 1 and 200 Hz. This GUI can be configured as multiple intuitive interfaces for data acquisition, commanding, and control to enable rapid testing of subsystems or an entire robot system while simultaneously performing analysis of data. The R4SA software establishes an intuitive component-based design environment that can be easily reconfigured for any robotic platform by creating or editing setup configuration files. The R4SA GUI enables event-driven and conditional sequencing similar to that of Mars Exploration Rover (MER) operations. It has been certified as part of the MER ground support equipment and, therefore, is allowed to be utilized in conjunction with MER flight hardware. The R4SA GUI could also be adapted for use in embedded computing systems other than that of the MER, for commanding and real-time analysis of data.

  5. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of executing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. The task was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  6. Kinematic rate control of simulated robot hand at or near wrist singularity

    NASA Technical Reports Server (NTRS)

    Barker, K.; Houck, J. A.; Carzoo, S. W.

    1985-01-01

    A robot hand should obey movement commands from an operator or a computer program as closely as possible. However, when two of the three rotational axes of the robot wrist are collinear, the wrist loses a degree of freedom and the usual resolved-rate equations (used to move the hand in response to an operator's inputs) become indeterminate. Furthermore, rate limiting occurs in the close vicinity of this singularity. An analysis shows that rate limiting occurs not only in the vicinity of this singularity but also substantially away from it, even when the operator commands rotational rates of the robot hand that are only a small percentage of the operational joint rate limits. Therefore, joint angle rates are scaled when they exceed operational limits in a real-time simulation of a robot arm. Simulation results show that a small dead band avoids the wrist singularity in the resolved-rate equations but can introduce a high-frequency oscillation close to the singularity. However, when a coordinated wrist movement is used in conjunction with the resolved-rate equations, the high-frequency oscillation disappears.
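
    The two safeguards this abstract describes, uniform scaling of over-limit joint rates and a dead band near the singularity, can be sketched compactly. The following Python fragment is a minimal illustration under assumed interfaces; the function name, threshold, and limit values are placeholders, not the paper's:

        import numpy as np

        def resolved_rate_step(J, twist_cmd, qdot_limits, sigma_min=1e-3):
            # Dead band: near the wrist singularity the smallest singular
            # value of the Jacobian collapses, so hold position rather
            # than command runaway joint rates.
            if np.linalg.svd(J, compute_uv=False)[-1] < sigma_min:
                return np.zeros(J.shape[1])
            qdot = np.linalg.pinv(J) @ twist_cmd
            # Scale the whole joint-rate vector uniformly when any joint
            # exceeds its operational limit, preserving the commanded
            # direction of hand motion.
            ratio = np.max(np.abs(qdot) / qdot_limits)
            return qdot / ratio if ratio > 1.0 else qdot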

  7. A new scheme of force reflecting control

    NASA Technical Reports Server (NTRS)

    Kim, Won S.

    1992-01-01

    A new scheme of force-reflecting control has been developed that incorporates position-error-based force reflection and robot compliance control. The operator is provided with kinesthetic force feedback proportional to the position error between the operator-commanded and actual positions of the robot arm. Robot compliance control, which increases the effective compliance of the robot, is implemented by low-pass filtering the outputs of the force/torque sensor mounted on the base of the robot hand and using these signals to alter the operator's position command. This position-error-based force reflection scheme combined with shared compliance control has been implemented successfully in the Advanced Teleoperation system, which consists of dissimilar master-slave arms. Stability measurements have demonstrated unprecedentedly high force-reflection gains of up to 2 or 3, even though the slave arm is much stiffer than the operator's hand holding the force-reflecting hand controller. Peg-in-hole experiments were performed with eight different operating modes to evaluate the new force-reflecting control scheme. The best task performance resulted with this new control scheme.
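
    The scheme sketched in this abstract reduces to two update rules per servo cycle: reflect the position error to the operator, and bias the position command with low-pass-filtered sensed force. The Python fragment below is a hedged illustration; the gains, filter constant, and names are assumptions, not values from the paper:

        import numpy as np

        def teleop_step(x_cmd, x_robot, f_sensor, f_filt,
                        alpha=0.1, k_reflect=2.0, c_comp=0.002):
            # Kinesthetic feedback proportional to the error between the
            # operator-commanded and actual slave-arm positions.
            f_reflect = k_reflect * (x_cmd - x_robot)
            # Low-pass filter the wrist force/torque sensor output ...
            f_filt = (1 - alpha) * f_filt + alpha * f_sensor
            # ... and use it to alter the position command, raising the
            # robot's effective compliance (shared compliance control).
            x_adj = x_cmd - c_comp * f_filt
            return f_reflect, x_adj, f_filt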

  8. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI.

    PubMed

    Chen, Xiaogang; Zhao, Bing; Wang, Yijun; Xu, Shengpu; Gao, Xiaorong

    2018-04-12

    Although robot technology has been successfully used to empower people who suffer from motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment, who do not have the motor control ability to move robots or prosthetic devices by manual control. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented using a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to directly control the robotic arm without system calibration. The online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, yielding a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) were able to successfully complete the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
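
    The core of SSVEP detection by canonical correlation analysis (CCA) is small enough to sketch: correlate the EEG epoch against sine/cosine references at each candidate stimulus frequency and pick the best match. The fragment below is plain CCA only; the paper's FBCCA additionally decomposes the EEG into filter-bank sub-bands and combines their weighted correlations. All parameters here are illustrative:

        import numpy as np

        def cca_corr(X, Y):
            # Largest canonical correlation between two multichannel
            # signals (rows = variables, columns = samples).
            Xc = X - X.mean(axis=1, keepdims=True)
            Yc = Y - Y.mean(axis=1, keepdims=True)
            Qx, _ = np.linalg.qr(Xc.T)
            Qy, _ = np.linalg.qr(Yc.T)
            return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

        def ssvep_detect(eeg, freqs, fs, n_harmonics=3):
            # eeg: channels x samples for one stimulation epoch.
            t = np.arange(eeg.shape[1]) / fs
            scores = []
            for f in freqs:
                refs = np.asarray([fn(2 * np.pi * h * f * t)
                                   for h in range(1, n_harmonics + 1)
                                   for fn in (np.sin, np.cos)])
                scores.append(cca_corr(eeg, refs))
            return freqs[int(np.argmax(scores))]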

  9. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper introduces a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a desired distance that allows the user to move freely. It is therefore necessary to acquire the control commands and the user's position in real time. We suggest a new sensor fusion system to achieve this objective, which we call the "intelligent lead". The objective of the intelligent lead is to acquire a stable estimate of the distance from the user to the robot, the speed-control volume, and the turn-control volume, even when the robot platform carrying the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit); their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can serve as a new HRI sensor, combining the roles of a joystick and a distance sensor, in mobile environments where the robot and the user move at the same time.
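
    As background to the fusion procedure named above, a generic EKF predict/update cycle is compact enough to sketch. This is the textbook form only; the paper's specific state vector, process model for the linkage/IMU/Kinect suite, and noise covariances are not reproduced here, and every callable below is a caller-supplied assumption:

        import numpy as np

        def ekf_step(x, P, z, u, f, h, F, H, Q, R):
            # Predict: propagate state x and covariance P through the
            # process model f with Jacobian F and process noise Q.
            x_pred = f(x, u)
            Fk = F(x, u)
            P_pred = Fk @ P @ Fk.T + Q
            # Update: correct with measurement z via the measurement
            # model h, its Jacobian H, and measurement noise R.
            Hk = H(x_pred)
            S = Hk @ P_pred @ Hk.T + R
            K = P_pred @ Hk.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
            return x_new, P_new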

  10. Kinematic control of robot with degenerate wrist

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Moore, M. C.

    1984-01-01

    Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.

  11. Research on wheelchair robot control system based on EOG

    NASA Astrophysics Data System (ADS)

    Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo

    2018-04-01

    The paper describes an intelligent wheelchair control system based on EOG, which can help disabled people improve their living ability. The system acquires the EOG signal from the user, detects the number of blinks and the direction of gaze, and then sends commands to the wheelchair robot via RS-232 to control it. The wheelchair robot control system based on EOG combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movements to control the wheelchair robot.

  12. Autonomous Shepherding Behaviors of Multiple Target Steering Robots

    PubMed Central

    Lee, Wonki; Kim, DaeEun

    2017-01-01

    This paper presents a distributed coordination methodology for multi-robot systems based on nearest-neighbor interactions. Among the many interesting tasks that may be performed using swarm robots, we propose a biologically inspired control law for a shepherding task, whereby one group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act as a flock. We assume that each agent is capable of measuring the relative location and velocity of each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots' positions relative to the flock. We define several independent behavior rules; each agent computes its movement by summing the contributions of each rule, as sketched below. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. Simulation results with multiple robots show that each robot performs actions for the shepherding behavior, and that only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach. PMID:29186836
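
    The rule-summation control law described above has a compact boids-style core: each flocking agent adds weighted vectors for cohesion, separation, and fleeing from nearby steering agents. The sketch below illustrates that structure only; the weights, sensing radius, and names are invented for illustration and are not the paper's values:

        import numpy as np

        def sheep_velocity(pos, flock_pos, shepherd_pos, sense_r=5.0,
                           w_cohesion=0.05, w_separation=0.2, w_flee=1.0):
            v = np.zeros(2)
            neighbors = [p for p in flock_pos
                         if 0 < np.linalg.norm(p - pos) < sense_r]
            if neighbors:
                # Cohesion rule: move toward the local flock center.
                v += w_cohesion * (np.mean(neighbors, axis=0) - pos)
                # Separation rule: do not crowd nearby flock-mates.
                for p in neighbors:
                    d = pos - p
                    v += w_separation * d / np.linalg.norm(d) ** 2
            # Avoidance rule: flee a steering agent inside sensing range;
            # the steering agents exploit this to drive the flock.
            d = pos - shepherd_pos
            if np.linalg.norm(d) < sense_r:
                v += w_flee * d / np.linalg.norm(d)
            return v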

  13. Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II

    DTIC Science & Technology

    2011-09-01

    ...for the Soldier, to ensure mission success while maximizing the survivability and lethality through the synergistic interaction of equipment... based touch interface for gloved finger interactions. This interface had to have larger-than-normal touch-screen buttons for commanding the robot... C.; Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR

  14. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  15. The Resurrection of Malthus: space as the final escape from the law of diminishing returns

    NASA Astrophysics Data System (ADS)

    Sommers, J.; Beldavs, V.

    2017-09-01

    If there is a self-sustaining space economy, which is the goal of the International Lunar Decade, then it is a subject of economic analysis. The immediate challenge of space economics is therefore to demonstrate conceptually how a space economy could emerge and function where markets do not exist and few human agents may be involved; indeed, human agents may transact with either human or robotic agents, and robotic agents may transact with other robotic agents.

  16. Kennedy Space Center, Space Shuttle Processing, and International Space Station Program Overview

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott Alan

    2011-01-01

    Topics include: International Space Station assembly sequence; Electrical power subsystem; Thermal control subsystem; Guidance, navigation and control; Command and data handling; Robotics; Human and robotic integration; Additional modes of re-supply; NASA and International partner control centers; Space Shuttle ground operations.

  17. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  18. Naval Sea Systems Command > Home

    Science.gov Websites

    STEM Programs: FIRST LEGO League Robotics Program; Carderock Math Contest; Educational Partnership Agreements; Math Clubs; Seaplane Challenge; Calculator-Controlled Robot Program.

  19. An A-Mazing Logo Experiment.

    ERIC Educational Resources Information Center

    Harris, Ross J.

    1983-01-01

    Discusses what can be done with a LOGO turtle robot, how it is different from doing LOGO with the computer-screen turtle, and the educational value of the device. Sample programs are provided, including one in which the robot turtle can be commanded to react to meeting an obstacle. (JN)

  20. Robot geometry calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Tso, Kam; Roston, Gerald

    1988-01-01

    Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial-link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into the RCCL robot language, and used in the NASA Telerobot Testbed.
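
    Geometric calibration of this kind is commonly posed as a nonlinear least-squares fit of the kinematic parameters to measured end-effector positions. The sketch below shows that generic formulation, not the paper's specific one; the forward-kinematics function fk and all names are caller-supplied assumptions:

        import numpy as np
        from scipy.optimize import least_squares

        def calibrate(params0, joint_sets, measured_pts, fk):
            # Residual: predicted minus measured end-effector positions
            # over all recorded joint configurations.
            def residual(params):
                pred = np.array([fk(params, q) for q in joint_sets])
                return (pred - measured_pts).ravel()
            # Identify the geometric parameters that best explain the
            # measurements; the calibrated model then feeds the inverse
            # kinematics behind a precision move command.
            return least_squares(residual, params0).x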

  1. Coordination of multiple robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Soloway, D.

    1987-01-01

    Kinematic resolved-rate control from one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any pre-disposed positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.

  2. A Biologically Inspired Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Tom; Craft, Mike; O'Neil, Daniel; Howell, Joe T. (Technical Monitor)

    2002-01-01

    A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  3. A Stigmergic Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.

    2004-01-01

    In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  4. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Jauer, P

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact-pressure detection, this robot provides an additional seventh link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to control the robot remotely for the ultrasound application, a real-time control and communication framework had to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated for overall real-time capability in the communication and processing of three different standard commands. Results: Due to its inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to control this robot remotely and use it for robotized ultrasound tracking applications, including image-quality control and target tracking.
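
    The client/server command path described in the Methods can be illustrated with a minimal relay: a central server accepts commands from a remote client and forwards them to the connection held by the module on the robot controller. The ports, the one-command-per-line framing, and all names below are illustrative assumptions, not the framework's actual protocol:

        import socket

        def serve(controller_addr=("127.0.0.1", 30002), listen_port=30001):
            srv = socket.socket()
            srv.bind(("0.0.0.0", listen_port))
            srv.listen(1)
            client, _ = srv.accept()                  # remote control client
            robot = socket.create_connection(controller_addr)
            with client, robot:
                # Forward newline-delimited commands to the client module
                # running on the robot's controller.
                for line in client.makefile():
                    robot.sendall(line.encode())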

  5. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface

    PubMed Central

    Batula, Alyssa M.; Kim, Youngmoo E.; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712

  6. Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface.

    PubMed

    Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan

    2017-01-01

    Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.

  7. Should We Turn the Robots Loose?

    DTIC Science & Technology

    2010-05-02

    ...interference. Potential sources of electromagnetic interference include everyday signals such as cell phones and WiFi, intentional friendly jamming of IED... might even attempt to hack or hijack our robotic warriors. Our current enemies have proven to be very adaptable and have developed simple counters to our... demonstrates the ease with which robot command and control might be hacked. It is reasonable to suspect that a future threat with a more robust

  8. 2018 Ground Robotics Capabilities Conference and Exhibiton

    DTIC Science & Technology

    2018-04-11

    ...Transportable Robot System (MTRS) Inc 1 Non-standard Equipment (approved) Explosive Ordnance Disposal Common Robotic System-Heavy (CRS-H) Inc 1 AROC: 3-Star... and engineering • AI risk mitigation methodologies and techniques are at best immature – e.g., V&V; probabilistic software analytics; code-level... controller to minimize potential UxS mishaps and unauthorized Command and Control (C2). • PSP-10 – Ensure that software systems which exhibit non

  9. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are repeated in every cycle of the operation. An off-line learning control scheme is used here to modify the command function so that the next operation yields smaller errors. The learning scheme is based on knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically for a second-order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of the errors if the rate information is not included. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors in a single attempt. The scheme is then applied to a computer model of a robot system similar to the PUMA 560. Improved performance of the robot is shown for various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limits of the robot's repeatability and noise characteristics.
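
    The between-cycle update described here is the classic iterative-learning-control form: correct the stored command with the previous cycle's error and error rate. A minimal sketch, with illustrative gains that are not the paper's:

        import numpy as np

        def ilc_update(u, e, e_rate, kp=0.5, kd=0.05):
            # Off-line update applied between repetitions of the task:
            # u, e, e_rate are arrays sampled over one cycle. Including
            # the error-rate (kd) term is what the simulations found
            # necessary for fast, reliable convergence.
            return u + kp * e + kd * e_rate

        # Usage: after cycle k, log the tracking error e_k(t) and its
        # rate, then reshape the command before running cycle k+1:
        # u = ilc_update(u, e_log, e_rate_log)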

  10. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  11. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    DTIC Science & Technology

    2016-07-27

    ...synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces... providing a test environment where the human control of a robot agent can be experimentally validated. Final Report: Brain Computer Interfaces for Enhanced Interactions with Mobile Robot

  12. Development and Command-Control Tools for Many-Robot Systems

    DTIC Science & Technology

    2005-01-01

    ...been components such as pressure sensors and accelerometers for the automobile market. In fact, robots of any size have yet to appear in our daily... mode, so that the target hardware is neither reprogrammable nor rechargeable. The goal of this paper is to propose some generic tools that the

  13. Target Trailing With Safe Navigation for Maritime Autonomous Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Kuwata, Yoshiaki; Zarzhitsky, Dimitri V.

    2013-01-01

    This software implements a motion-planning module for a maritime autonomous surface vehicle (ASV). The module trails a given target while also avoiding static and dynamic surface hazards. When the surface hazards are other moving boats, the motion planner must apply the International Regulations for Preventing Collisions at Sea (COLREGS). A key subset of these rules has been implemented in the software. In case contact with the target is lost, the software can receive and follow a "reacquisition route," provided by a complementary system, until the target is reacquired. The programmatic intention is that the trailed target is a submarine, although any mobile naval platform could serve as the target. The algorithmic approach of combining motion toward a (possibly moving) goal location with avoidance of local hazards may be applicable to robotic rovers, automated landing systems, and autonomous airships. The software operates in JPL's CARACaS (Control Architecture for Robotic Agent Command and Sensing) software architecture and relies on other modules for environmental perception data and information on the predicted detectability of the target, as well as for the low-level interface to the boat controls.
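
    As a flavor of the rule subset such a planner must encode, the sketch below classifies an encounter from the contact's relative bearing and returns a nominal COLREGS obligation. This is a heavily simplified illustration with invented thresholds, not the software's actual implementation:

        def colregs_situation(rel_bearing_deg):
            # rel_bearing_deg: bearing of the contact relative to own
            # heading, 0 = dead ahead, measured clockwise.
            b = rel_bearing_deg % 360
            if b < 6 or b > 354:
                return "head-on: alter course to starboard"
            if 112.5 <= b <= 247.5:
                return "contact abaft the beam: likely overtaking, stand on"
            if b <= 112.5:
                return "crossing from starboard: give way"
            return "crossing from port: stand on"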

  14. Paralyzed subject controls telepresence mobile robot using novel sEMG brain-computer interface: case study.

    PubMed

    Lyons, Kenneth R; Joshi, Sanjay S

    2013-06-01

    Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A WiFi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study in which a C3-C4 spinal cord injury (SCI) subject used a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI, as well as for completion of the telerobotic command task, are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.

  15. An extension of command shaping methods for controlling residual vibration using frequency sampling

    NASA Technical Reports Server (NTRS)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse-shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems, such as robots, whose resonant frequencies change during motion. The authors present a review of impulse-shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the Space Shuttle robot arm. Frequency sampling provides a method for minimizing the impulse-sequence duration required to give the desired insensitivity.
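
    For readers unfamiliar with impulse shaping, the baseline technique that the frequency-sampling extension builds on is easy to sketch: convolve the raw command with a short impulse sequence whose amplitudes and spacing cancel the residual vibration of a known mode. Below is the textbook two-impulse zero-vibration (ZV) shaper, shown as background only; it is not the paper's extended method:

        import numpy as np

        def zv_shaper(wn, zeta, dt):
            # Two-impulse ZV shaper for a mode with natural frequency
            # wn [rad/s] and damping ratio zeta, sampled at step dt.
            wd = wn * np.sqrt(1 - zeta ** 2)           # damped frequency
            K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
            amps = np.array([1.0, K]) / (1.0 + K)      # sum to one
            times = np.array([0.0, np.pi / wd])        # half damped period
            seq = np.zeros(int(round(times[-1] / dt)) + 1)
            for a, t in zip(amps, times):
                seq[int(round(t / dt))] += a
            return seq

        # Shaping = convolving the raw command with the impulse sequence:
        # shaped = np.convolve(raw_cmd, zv_shaper(2*np.pi*0.5, 0.05, 0.01))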

  16. Robots, systems, and methods for hazard evaluation and visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at its location, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at that position relative to the scale.
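
    The autonomous survey loop this record describes reduces to: sense, record, choose the next pose, repeat, then report the map. The sketch below assumes a hypothetical robot object exposing pose(), sense_hazard(), move_to(), and next_pose(); every name is invented for illustration:

        def survey_hazards(robot, n_steps):
            levels = {}
            for _ in range(n_steps):
                pose = robot.pose()
                levels[pose] = robot.sense_hazard()    # hazard at this spot
                # Pick the next location from what has been sensed so far
                # (e.g., step along the intensity gradient).
                robot.move_to(robot.next_pose(levels))
            return levels                              # sent to the remote GUI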

  17. Development and validation of a low-cost mobile robotics testbed

    NASA Astrophysics Data System (ADS)

    Johnson, Michael; Hayes, Martin J.

    2012-03-01

    This paper considers the design, construction and validation of a low-cost experimental robotic testbed which allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of different mobile robotic applications, and for validating theoretical as well as practical research work in the fields of digital control, mobile robotics, graphical programming and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real time using an overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and associated tracking system, such as radial lens distortion and the selection of robot identifier templates, are clearly addressed. The testbed performance is quantified, and several experiments involving LEGO Mindstorms NXT and Merlin System MiaBot robots are discussed.

  18. Robotic Exploration: The Role of Science Autonomy

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    Historical mission operations have involved: (1) commands transmitted to the craft; (2) execution of commands; (3) return of scientific data; (4) evaluation of these data by scientists; and (5) recommendations for future mission activity by scientists. This cycle is repeated throughout the mission with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long range traverses or rapid response to any novel or unexpected situations.

  19. Agent independent task planning

    NASA Technical Reports Server (NTRS)

    Davis, William S.

    1990-01-01

    Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.

  20. Three degree-of-freedom force feedback control for robotic mating of umbilical lines

    NASA Technical Reports Server (NTRS)

    Fullmer, R. Rees

    1988-01-01

    The use of robotic manipulators for the mating and demating of umbilical fuel lines to the Space Shuttle vehicle prior to launch is investigated. Force feedback control is necessary to minimize the contact forces which develop during mating. The objective is to develop and demonstrate a working robotic force control system. Initial experimental force control tests with an ASEA IRB-90 industrial robot using the system's Adaptive Control capabilities indicated that control stability would be a primary problem. An investigation of the ASEA system showed a 0.280-second software delay between force input commands and the output of command voltages to the servo system. This computational delay was identified as the primary cause of the instability. Tests on a second path into the ASEA's control computer, using the MicroVax II supervisory computer, showed that the time delay would be comparable, offering no stability improvement. An alternative approach was therefore developed in which the digital control system of the robot was disconnected and an analog electronic force controller was used to control the robot's servo system directly, allowing the robot to use force feedback control while in rigid contact with a moving three-degree-of-freedom target. Tests on this approach indicated adequate force feedback control even under worst-case conditions. A strategy for combining this force controller with the digitally controlled vision system was also developed; it requires switching between the digital controller when using vision control and the analog controller when using force control, depending on whether or not the mating plates are in contact.

  1. Robotic wheelchair commanded by SSVEP, motor imagery and word generation.

    PubMed

    Bastos, Teodiano F; Muller, Sandra M T; Benevides, Alessandro B; Sarcinelli-Filho, Mario

    2011-01-01

    This work presents a robotic wheelchair that can be commanded by a Brain Computer Interface (BCI) through Steady-State Visual Evoked Potentials (SSVEP), motor imagery, and word generation. When using SSVEP, a statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency, allowing volunteers to operate the BCI online, with hit rates varying from 60% to 100%, and guide the robotic wheelchair through an indoor environment. When using motor imagery and word generation, three mental tasks are used: imagination of left-hand movement, imagination of right-hand movement, and imagined generation of words starting with the same random letter. Linear Discriminant Analysis is used to recognize the mental tasks, with feature extraction based on Power Spectral Density. The choice of EEG channel and frequency uses the symmetric Kullback-Leibler divergence, and a reclassification model is proposed to stabilize the classifier.
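
    The PSD-plus-LDA pipeline named here is standard enough to sketch with common scientific-Python tools; channel/frequency selection and the reclassification model are omitted, and all parameters and data names below are illustrative:

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def psd_features(epochs, fs):
            # epochs: trials x channels x samples -> flattened PSD features.
            _, p = welch(epochs, fs=fs, nperseg=int(fs))
            return p.reshape(len(epochs), -1)

        # Hypothetical training data: EEG epochs X with task labels y in
        # {left hand, right hand, word generation}.
        # clf = LinearDiscriminantAnalysis().fit(psd_features(X, 256), y)
        # task = clf.predict(psd_features(new_epoch[None], 256))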

  2. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor-information fusion by multiple autonomous robots in the context of landmine detection. We assume that different landmines can be composed of different types of material, and that robots are equipped with different types of sensors, with each robot carrying only one type of landmine-detection sensor. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the robot's sensory input and performs the calculations of the prediction-market technique. The result of the agent's calculations is a 'belief' representing the agent's confidence in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision-maker agent. The decision-maker agent uses this aggregate belief about a potential landmine to decide which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction-market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
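
    At its simplest, market-style aggregation weights each robot's reported belief by a running score (the market analog of wealth) earned from past correct predictions. The fragment below is a stand-in for the paper's mechanism, with hypothetical names and numbers:

        def aggregate_beliefs(beliefs, wealth):
            # Weighted average of per-robot landmine beliefs, weighting
            # each agent by its accumulated 'wealth'; agents with better
            # track records move the aggregate more.
            total = sum(wealth[a] for a in beliefs)
            return sum(beliefs[a] * wealth[a] for a in beliefs) / total

        beliefs = {"r1": 0.9, "r2": 0.4, "r3": 0.7}  # sensor-agent beliefs
        wealth = {"r1": 2.0, "r2": 1.0, "r3": 1.5}   # past-accuracy weights
        print(aggregate_beliefs(beliefs, wealth))    # aggregate confidence
        # A decision-maker agent would threshold this value to choose
        # which robots to send for confirmation.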

  3. Can robots be responsible moral agents? And why should we care?

    NASA Astrophysics Data System (ADS)

    Sharkey, Amanda

    2017-07-01

    This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents; (ii) whether it is sufficient to design robots to comply with existing laws and human rights; and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.

  4. Algorithms of walking and stability for an anthropomorphic robot

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.

    2017-09-01

    Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered an agent of some multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that constitutes a multi-agent system, allowing algorithms for individual movement patterns to be developed and run as a set of independently executing, interacting agents. Algorithms for lateral movement of the AP-601 series anthropomorphic robot with active stabilization, provided by the stability pattern, are presented.

  5. ATHLETE's Feet: Multi-Resolution Planning for a Hexapod Robot

    NASA Technical Reports Server (NTRS)

    Smith, Tristan B.; Barreiro, Javier; Smith, David E.; SunSpiral, Vytas; Chavez-Clemente, Daniel

    2008-01-01

    ATHLETE is a large six-legged tele-operated robot. Each foot is a wheel; travel can be achieved by walking, rolling, or some combination of the two. Operators control ATHLETE by selecting parameterized commands from a command dictionary. While rolling can be done efficiently with a single command, any motion involving steps is cumbersome: walking a few meters through difficult terrain can take hours. Our goal is to improve operator efficiency by automatically generating sequences of motion commands. There is increasing uncertainty regarding ATHLETE's actual configuration over time, and decreasing quality of terrain data farther from the current position. This, combined with the complexity that results from 36 degrees of kinematic freedom, led to an architecture that interleaves planning and execution at multiple levels, ranging from traditional configuration-space motion-planning algorithms for immediate moves to higher-level task- and path-planning algorithms for overall travel. The modularity of the architecture also simplifies the development process and allows the operator to interact with and control the system at varying levels of autonomy, depending on terrain and need.

  6. Intelligent manipulation technique for multi-branch robotic systems

    NASA Technical Reports Server (NTRS)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of a kinematics spline theory and an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) framework integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for nominal robotics knowledge; and self-organized neural networks for the dynamic evolution of knowledge. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.

  7. Initial experiments in thrusterless locomotion control of a free-flying robot

    NASA Technical Reports Server (NTRS)

    Jasper, W. J.; Cannon, R. H., Jr.

    1990-01-01

    A two-arm free-flying robot has been constructed to study thrusterless locomotion in space. This is accomplished by pushing off or landing on a large structure in a coordinated two-arm maneuver. A new control method, called system momentum control, allows the robot to follow desired momentum trajectories and thus leap or crawl from one structure to another. The robot floats on an air-cushion, simulating in two dimensions the drag-free zero-g environment of space. The control paradigm has been verified experimentally by commanding the robot to push off a bar with both arms, rotate 180 degrees, and catch itself on another bar.

  8. Types of verbal interaction with instructable robots

    NASA Technical Reports Server (NTRS)

    Crangle, C.; Suppes, P.; Michalowski, S.

    1987-01-01

    An instructable robot is one that accepts instruction in some natural language, such as English, and uses that instruction to extend its basic repertoire of actions. Such robots are quite different in conception from autonomously intelligent robots, which provide the impetus for much of the research on inference and planning in artificial intelligence. Examined here are the significant problem areas in the design of robots that learn from verbal instruction. Examples are drawn primarily from our earlier work on instructable robots and recent work on the Robotic Aid for the physically disabled. Natural-language understanding by machines is discussed, as are the possibilities and limits of verbal instruction. The core problem of verbal instruction, namely, how to achieve specific concrete action in the robot in response to commands that express general intentions, is considered, as are two major challenges to instructability: achieving appropriate real-time behavior in the robot, and extending the robot's language capabilities.

  9. Whole-body Motion Planning with Simple Dynamics and Full Kinematics

    DTIC Science & Technology

    2014-08-01

    optimizations can take an excessively long time to run, and may also suffer from local minima. Thus, this approach can become intractable for complex robots...motions like jumping and climbing. Additionally, the point-mass model suggests that the centroidal angular momentum is zero, which is not valid for motions...use in the DARPA Robotics Challenge. A. Jumping Our first example is to command the robot to jump off the ground, as illustrated in Fig.4. We assign

  10. Selfie in Cupola module

    NASA Image and Video Library

    2015-05-24

    ISS043E241729 (05/24/2015) --- Expedition 43 commander and NASA astronaut Terry Virts is seen here inside of the station’s Cupola module. The Cupola is designed for the observation of operations outside the ISS such as robotic activities, the approach of vehicles, and spacewalks. It also provides spectacular views of Earth and celestial objects for use in astronaut observation experiments. It houses the robotic workstation that controls the space station’s robotic arm and can accommodate two crewmembers simultaneously.

  11. Fast and Efficient Radiological Interventions via a Graphical User Interface Commanded Magnetic Resonance Compatible Robotic Device

    PubMed Central

    Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos

    2011-01-01

    The graphical user interface for an MR compatible robotic device has the capability of displaying oblique MR slices in 2D and a 3D virtual environment along with the representation of the robotic arm in order to swiftly complete the intervention. Using the advantages of the MR modality the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067

  12. New luster for space robots and automation

    NASA Technical Reports Server (NTRS)

    Heer, E.

    1978-01-01

    Consideration is given to the potential role of robotics and automation in space transportation systems. Automation development requirements are defined for projects in space exploration, global services, space utilization, and space transport. In each category the potential automation of ground operations, on-board spacecraft operations, and in-space handling is noted. The major developments of space robot technology are noted for the 1967-1978 period. Economic aspects of ground-operation, ground command, and mission operations are noted.

  13. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    PubMed Central

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651

  14. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.

    PubMed

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

  15. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years, audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication by voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system using audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is parametrized using the Mel Frequency Cepstral Coefficients (MFCC) method. In addition, features based on the points that define the mouth's outer contour, according to the MPEG-4 standard, are used to extract the visual speech information.
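
    As a hedged illustration of the audio front end named above, MFCC parametrization of a spoken command is a few lines with a common audio library (librosa here); the file name and parameter choices are assumptions, and the MPEG-4 lip-contour features are not shown:

        import librosa

        # Load a hypothetical recorded command and compute 13 MFCCs per
        # frame, a typical parametrization for speech recognition.
        y, sr = librosa.load("command.wav", sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        print(mfcc.shape)  # (13, n_frames)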

  16. Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots

    NASA Technical Reports Server (NTRS)

    Chen, Vincent Wei-Kang

    1992-01-01

    Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low level, tedious assembly and maintenance chores and allowing them to concentrate on higher level tasks. Robots and astronauts can work together efficiently, as a team; but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances to the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation. The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot that is capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task description commands to the robot, and to monitor robot activities as it then carried out each assignment autonomously.
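
    The dissertation's exact control law is not given in this summary; the following is a hedged sketch of the general kind of nonlinear adaptive update such work builds on, here a Slotine-Li style law for a single joint with unknown inertia and viscous friction, chosen purely for illustration.

      import numpy as np

      # One control step for a 1-DOF arm modeled as m*ddq + b*dq = tau,
      # with unknown parameters theta = [m, b] estimated online.
      def adaptive_step(q, dq, q_d, dq_d, ddq_d, theta_hat, dt,
                        lam=2.0, Kd=5.0, Gamma=np.diag([0.5, 0.5])):
          e, de = q - q_d, dq - dq_d
          s = de + lam * e                        # composite tracking error
          dq_r = dq_d - lam * e                   # reference velocity
          ddq_r = ddq_d - lam * de                # reference acceleration
          Y = np.array([ddq_r, dq_r])             # regressor for [m, b]
          tau = Y @ theta_hat - Kd * s            # control torque
          theta_hat = theta_hat - Gamma @ Y * s * dt   # parameter adaptation
          return tau, theta_hat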

  17. Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning

    PubMed Central

    Farkaš, Igor; Malík, Tomáš; Rebrová, Kristína

    2012-01-01

    The recent outburst of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in the robot’s peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model, and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well in the case of novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action/target suddenly changes during motor execution. PMID:22393319
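
    The second module is an actor-critic reinforcement learner; below is a minimal sketch of one update step on linear features. A CACLA-style continuous-action rule is assumed here for illustration; the paper's network sizes and exact variant are not reproduced.

      import numpy as np

      def actor_critic_step(phi, phi_next, reward, action,
                            w_critic, w_actor,
                            gamma=0.95, alpha_c=0.1, alpha_a=0.01):
          v, v_next = w_critic @ phi, w_critic @ phi_next
          delta = reward + gamma * v_next - v        # TD error
          w_critic = w_critic + alpha_c * delta * phi
          if delta > 0:                              # reinforce actions that
              mu = w_actor @ phi                     # beat the value estimate
              w_actor = w_actor + alpha_a * (action - mu) * phi
          return w_critic, w_actor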

  18. Towards Commanding Unmanned Ground Vehicle Movement in Unfamiliar Environments Using Unconstrained English: Initial Research Results

    DTIC Science & Technology

    2007-06-01

    constrained list of command words could be valuable in many systems, as would the ability of driverless vehicles to navigate through a route...Sensemaking in UGVs • Future Combat Systems UGV roles – Driverless trucks – Robotic mules (soldier, squad aid) – Intelligent munitions – And more! • Some

  19. Autonomous mobile robot teams

    NASA Technical Reports Server (NTRS)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to produce behavior that transforms their sensory information into appropriate action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
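
    The abstract does not detail the tropism rules themselves; the toy sketch below illustrates the general idea of a tropism table mapping sensed entities and internal state to attraction- or repulsion-driven actions. All entity names, states, and strengths are invented for illustration.

      # Each rule: (sensed entity, required internal state, action, strength).
      RULES = [
          ("object",   "searching", "gather", 0.8),
          ("predator", "any",       "flee",   1.0),
          ("obstacle", "any",       "avoid",  0.6),
      ]

      def select_action(sensed, state):
          applicable = [(a, t) for e, s, a, t in RULES
                        if e == sensed and s in (state, "any")]
          # The strongest applicable tropism wins; otherwise wander.
          return max(applicable, key=lambda at: at[1])[0] if applicable else "wander"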

  20. Spider World: A Robot Language for Learning to Program. Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Dalbey, John; Linn, Marcia

    Spider World is an interactive program designed to help individuals with no previous computer experience to learn the fundamentals of programming. The program emphasizes cognitive tasks which are central to programming and provides significant problem-solving opportunities. In Spider World, the user commands a hypothetical robot (called the…

  1. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  2. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  3. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  4. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  5. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  6. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  7. Robonaut 2 Humanoid Robot

    NASA Image and Video Library

    2012-03-13

    ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.

  8. Social Studies in Motion: Learning with the Whole Person

    ERIC Educational Resources Information Center

    Schulte, Paige L.

    2005-01-01

    Total Physical Response (TPR), developed by James Asher, is defined as a teaching technique whereby a learner responds to language input with body motions. Performing a chant or the game "Robot" is an example of a TPR activity, where the teacher commands her robots to do some task in the classroom. Acting out stories and giving imperative commands…

  9. Phillips at Robotics Workstation (RWS) in US Laboratory Destiny

    NASA Image and Video Library

    2009-03-20

    S119-E-006748 (20 March 2009) --- Astronauts Lee Archambault, (foreground), STS-119 commander, John Phillips and Sandra Magnus, both mission specialists, are pictured at the robotic workstation in Destiny or the U.S. laboratory. Magnus is winding down a lengthy tour in space aboard the orbiting outpost, and she will return to Earth with the Discovery crew.

  10. Do infants perceive the social robot Keepon as a communicative partner?

    PubMed

    Peca, Andreea; Simut, Ramona; Cao, Hoang-Long; Vanderborght, Bram

    2016-02-01

    This study investigates if infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. 23 infants, aged 9-17 months, were exposed, in a first phase, to either a contingent interaction between the active robot and an active human adult, or to an interaction between an active human adult and the non-active robot, followed by a second phase, in which infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were: (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention to the agent that follows in the conversation. The results indicate a significantly higher level of initiations in the interactive robot condition compared to the non-active robot condition, while the difference between the frequencies of anticipations of turn-taking behaviors was not significant. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. What Force and Metrics for What End - Characterizing the Future Leadership and Force

    DTIC Science & Technology

    2006-06-01

    interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for the ultimate good. Source – Asimov, Isaac. "I, Robot...Robotics + the Zeroth Law' (Asimov, 2006 Command and Control Research and Technology Symposium 'The State of the Art and the State of the Practice' ASD...outcomes. Here the author returns to the introduction of the example derived from Asimov (1940, 1970) and Brin (1999) 'four laws of robotics

  12. Issues in impedance selection and input devices for multijoint powered orthotics.

    PubMed

    Lemay, M A; Hogan, N; van Dorsten, J W

    1998-03-01

    We investigated the applicability of impedance controllers to robotic orthoses for arm movements. We had tetraplegics turn a crank using their paralyzed arm, propelled by a planar robot manipulandum. The robot was under impedance control, and chin motion served as the command source. Stiffness was set to 50, 100, or 200 N/m, and damping to 5 or 15 N·s/m. Results indicated that a low stiffness and a high viscosity provided better directional control of the tangential force exerted on the crank.
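
    A minimal sketch of the planar impedance law implied above: the commanded endpoint force is F = K(x_d - x) + B(dx_d - dx), with the chin-commanded equilibrium trajectory supplying x_d and dx_d. The stiffness and damping levels are the ones tested in the study; everything else is an illustrative assumption.

      import numpy as np

      K_LEVELS = [50.0, 100.0, 200.0]   # tested stiffness values, N/m
      B_LEVELS = [5.0, 15.0]            # tested damping values, N*s/m

      def impedance_force(x, dx, x_d, dx_d, K=100.0, B=15.0):
          x, dx, x_d, dx_d = map(np.asarray, (x, dx, x_d, dx_d))
          # Planar (2-D) endpoint force from the virtual spring-damper.
          return K * (x_d - x) + B * (dx_d - dx)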

  13. (abstract) Telecommunications for Mars Rovers and Robotic Missions

    NASA Technical Reports Server (NTRS)

    Cesarone, Robert J.; Hastrup, Rolf C.; Horne, William; McOmber, Robert

    1997-01-01

    Telecommunications plays a key role in all rover and robotic missions to Mars both as a conduit for command information to the mission and for scientific data from the mission. Telecommunications to the Earth may be accomplished using direct-to-Earth links via the Deep Space Network (DSN) or by relay links supported by other missions at Mars. This paper reviews current plans for missions to Mars through the 2005 launch opportunity and their capabilities in support of rover and robotic telecommunications.

  14. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley" (a phenomenon in which highly humanlike entities provoke aversion in human observers) has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.

  15. Coordination of dual robot arms using kinematic redundancy

    NASA Technical Reports Server (NTRS)

    Suh, Il Hong; Shin, Kang G.

    1988-01-01

    A method is developed to coordinate the motion of dual robot arms carrying a solid object, where the first robot (leader) grasps one end of the object rigidly and the second robot (follower) is allowed to change its grasping position at the other end of the object along the object surface while supporting the object. It is shown that this flexible grasping is equivalent to the addition of one more degree of freedom (dof), giving the follower more maneuvering capabilities. In particular, motion commands for the follower are generated by using kinematic redundancy. To show the utility and power of the method, an example system with two PUMA 560 robots carrying a beam is analyzed.
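
    A minimal sketch of the redundancy-resolution step for the follower arm: joint rates combine a pseudoinverse solution for the object-support task with a null-space term exploiting the extra degree of freedom gained from the sliding grasp. The secondary-objective vector z is an assumption for illustration.

      import numpy as np

      def follower_rates(J, x_dot, z):
          # J: task Jacobian; x_dot: commanded task velocity;
          # z: joint-space gradient of a secondary objective.
          J_pinv = np.linalg.pinv(J)
          N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
          return J_pinv @ x_dot + N @ z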

  16. A Decentralized Framework for Multi-Agent Robotic Systems

    PubMed Central

    2018-01-01

    Over the past few years, decentralization of multi-agent robotic systems has become an important research area. These systems do not depend on a central control unit, which enables the control and assignment of distributed, asynchronous and robust tasks. However, in some cases, the network communication process between robotic agents is overlooked, and this creates a dependency for each agent to maintain a permanent link with nearby units to be able to fulfill its goals. This article describes a communication framework, where each agent in the system can leave the network or accept new connections, sending its information based on the transfer history of all nodes in the network. To this end, each agent needs to comply with four processes to participate in the system, plus a fifth process for data transfer to the nearest nodes that is based on Received Signal Strength Indicator (RSSI) and data history. To validate this framework, we use differential robotic agents and a monitoring agent to generate a topological map of an environment with the presence of obstacles. PMID:29389849

  17. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the User Datagram Protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.

  18. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the User Datagram Protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538
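
    Both records specify only that a UDP link carries the decoded intention to the robot or virtual agent. The sketch below shows such a link in Python; the address, port, and message format are invented for illustration.

      import json
      import socket

      ROBOT_ADDR = ("192.168.0.10", 5005)   # hypothetical robot endpoint

      def send_decoded_command(command):
          # Package the decoded BCI intention and fire it over UDP.
          msg = json.dumps({"cmd": command}).encode("utf-8")
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
              sock.sendto(msg, ROBOT_ADDR)

      send_decoded_command("turn_left")     # e.g., a decoded oddball target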

  19. A Human Machine Interface for EVA

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a heads up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can also respond with speech to confirm selections, provide status and feedback, and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field including robot joint torques, end effector configuration, procedure checklists and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques. Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. However, reducing their dependence on such personnel may, in many circumstances, improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.

  20. Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.

    2006-01-01

    A document poses, and suggests a program of research for answering, questions of how to achieve autonomous operation of herds of cooperative robots to be used in exploration and/or colonization of remote planets. In a typical scenario, a flock of mobile sensory robots would be deployed in a previously unexplored region, one of the robots would be designated the leader, and the leader would issue commands to move the robots to different locations or aim sensors at different targets to maximize scientific return. It would be necessary to provide for this hierarchical, cooperative behavior even in the face of such unpredictable factors as terrain obstacles. A potential-fields approach is proposed as a theoretical basis for developing methods of autonomous command and guidance of a herd. A survival-of-the-fittest approach is suggested as a theoretical basis for selection, mutation, and adaptation of a description of (1) the body, joints, sensors, actuators, and control computer of each robot, and (2) the connectivity of each robot with the rest of the herd, such that the herd could be regarded as consisting of a set of artificial creatures that evolve to adapt to a previously unknown environment. A distributed simulation environment has been developed to test the proposed approaches in the Titan environment. One blimp guides three surface sondes via a potential field approach. The results of the simulation demonstrate that the method used for control is feasible, even if significant uncertainty exists in the dynamics and environmental models, and that the control architecture provides the autonomy needed to enable surface science data collection.
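
    A minimal sketch of the potential-fields guidance proposed for the herd: each robot's commanded velocity combines an attractive well at the goal with inverse-square repulsion near obstacles or neighbors. The gains and cutoff distance are illustrative assumptions.

      import numpy as np

      def field_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
          v = k_att * (goal - pos)                  # attractive term
          for obs in obstacles:
              d = pos - obs
              dist = np.linalg.norm(d)
              if 1e-6 < dist < d0:                  # repel only when close
                  v += k_rep * (1.0 / dist - 1.0 / d0) * d / dist**3
          return v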

  1. STS-111 Flight Day 5 Highlights

    NASA Astrophysics Data System (ADS)

    2002-06-01

    On Flight Day 5 of STS-111, the crew of Endeavour (Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist) and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer) and the Expedition 4 crew (Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer) are aboard the docked Endeavour and International Space Station (ISS). The ISS cameras show the station in orbit above the North African coast and the Mediterranean Sea, as Chang-Diaz and Perrin prepare for an EVA (extravehicular activity). The Canadarm 2 robotic arm is shown in motion in a wide-angle shot. The Quest Airlock is shown as it opens to allow the astronauts to exit the station. As orbital sunrise approaches, the astronauts are shown already engaged in their EVA activities. Chang-Diaz is shown removing the PDGF (Power and Data Grapple Fixture) from Endeavour's payload bay as Perrin prepares its installation position in the ISS's P6 truss structure; the MPLM is also visible. Following the successful detachment of the PDGF, Chang-Diaz carries it to the installation site as he is transported there by the robotic arm. The astronauts are then shown installing the PDGF, with video provided by helmet-mounted cameras. Following this task, the astronauts are shown preparing the MBS (Mobile Base System) for grappling by the robotic arm. It will be mounted to the Mobile Transporter (MT), which will traverse a railroad-like system along the truss structures of the ISS, and support astronaut activities as well as provide an eventual mobile base for the robotic arm.

  2. Bio-robots automatic navigation with electrical reward stimulation.

    PubMed

    Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    Bio-robots that are controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on the real-time guidance of human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, with the animals' intelligence ignored. This paper proposes a new method to realize automatic navigation for bio-robots with electrical micro-stimulation as real-time rewards. Owing to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and correct its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the controlling method in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.

  3. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction in a novel and natural interface using gestures, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the action of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.

  4. Next Generation Robots for STEM Education and Research at Huston Tillotson University

    DTIC Science & Technology

    2017-11-10

    dynamics through the following command: roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion: After...understood the system's natural dynamics. roslaunch mtb_lab6_feedback_linearization gravity_compensation.launch Part B: Gravity Inversion...is created using the following command: roslaunch mtb_lab6_feedback_linearization gravity_inversion.launch Gravity inversion is just one

  5. A software toolbox for robotics

    NASA Technical Reports Server (NTRS)

    Sanwal, J. C.

    1985-01-01

    A method is given for programming cooperating manipulators, guided by a geometric description of the task to be performed. This requires a suitable language and a method for describing the workplace and the objects in it in geometric terms. A task-level command language and its implementation for concurrently driven multiple robot arms are described. The language is suitable for driving a cell in which manipulators, end effectors, and sensors are controlled by their own dedicated processors. These processors can communicate with each other through a communication network. A mechanism for keeping track of the history of the commands already executed allows the command language for the manipulators to be event driven. A frame-based world modeling system is utilized to describe the objects in the work environment and any relationships that hold between these objects. This system provides a versatile tool for managing information about the world model. Default actions normally needed are invoked when the database is updated or accessed. Most first-level error recovery is also invoked by the database, utilizing the concept of demons. The package can be utilized to generate task-level commands in a problem solver or a planner.

  6. Multiagent robotic systems' ambient light sensor

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maslennikov, Oleg S.; Komarov, Igor I.

    2017-05-01

    Swarm robotics is one of the fastest growing areas of modern technology. As a subclass of multi-agent systems, it inherits much of the scientific and methodological apparatus for constructing and operating practically useful complexes consisting of largely autonomous, independent agents. Ambient light sensors (ALS) are widely used in robotics. In swarm robotics, however, a developing technology with many specific features, it is important that each robot's sensors serve not only to orient it directionally but also to let it follow light emitted by a leader robot or find the goal more easily. Key words: ambient light sensor, swarm system, multiagent system, robotic system, robotic complexes, simulation modelling

  7. Tutorial Workshop on Robotics and Robot Control.

    DTIC Science & Technology

    1982-10-26

    US ARMY TANK-AUTOMOTIVE COMMAND, WARREN, MICHIGAN; US ARMY MATERIEL SYSTEMS ANALYSIS ACTIVITY, ABERDEEN PROVING GROUNDS, MARYLAND ...Technology, Pasadena, California 91103 M. Vur.kovic Senior Research Associate Institute for Technoeconomic Systems Department of Industrial...Further investigation of the action precedence graphs together with their application to more complex manipulator tasks and analysis of their

  8. Virts in Cupola

    NASA Image and Video Library

    2015-05-31

    ISS043E276404 (05/31/2015) --- Expedition 43 Commander and NASA astronaut Terry Virts is seen here in the International Space Station’s Cupola module, a 360 degree Earth and space viewing platform. The module also contains a robotic workstation for controlling the station’s main robotic arm, Canadarm2, which is used for a variety of operations including the remote grappling of visiting cargo vehicles.

  9. Can Robots Help the Learning of Skilled Actions?

    PubMed Central

    Reinkensmeyer, David J.; Patton, James L.

    2010-01-01

    Learning to move skillfully requires that the motor system adjusts muscle commands based on ongoing performance errors, a process influenced by the dynamics of the task being practiced. Recent experiments from our laboratories show how robotic devices can temporarily alter task dynamics in ways that contribute to the motor learning experience, suggesting possible applications in rehabilitation and sports training. PMID:19098524

  10. Monitoring and Controlling an Underwater Robotic Arm

    NASA Technical Reports Server (NTRS)

    Haas, John; Todd, Brian Keith; Woodcock, Larry; Robinson, Fred M.

    2009-01-01

    The SSRMS Module 1 software is part of a system for monitoring and adaptive, closed-loop control of the motions of a robotic arm in NASA's Neutral Buoyancy Laboratory, where buoyancy in a pool of water is used to simulate the weightlessness of outer space. This software is so named because the robot arm is a replica of the Space Shuttle Remote Manipulator System (SSRMS). The software is distributed, running on remote joint processors (RJPs), each of which is mounted in a hydraulic actuator comprising a joint of the robotic arm and communicating with a poolside processor denoted the Direct Control Rack (DCR). Each RJP executes the feedback joint-motion control algorithm for its joint and communicates with the DCR. The DCR receives joint-angular-velocity commands either locally from an operator or remotely from computers that simulate the flight-like SSRMS and perform coordinated motion calculations based on hand-controller inputs. The received commands are checked for validity before they are transmitted to the RJPs. The DCR software generates a display of the statuses of the RJPs for the DCR operator and can shut down the hydraulic pump when an excessive joint-angle error or a failure of an RJP is detected.

  11. Apparatus and method for modifying the operation of a robotic vehicle in a real environment, to emulate the operation of the robotic vehicle operating in a mixed reality environment

    DOEpatents

    Garretson, Justin R [Albuquerque, NM; Parker, Eric P [Albuquerque, NM; Gladwell, T Scott [Albuquerque, NM; Rigdon, J Brian [Edgewood, NM; Oppel, III, Fred J.

    2012-05-29

    Apparatus and methods for modifying the operation of a robotic vehicle in a real environment to emulate the operation of the robotic vehicle in a mixed reality environment include a vehicle sensing system having a communications module attached to the robotic vehicle for communicating operating parameters related to the robotic vehicle in a real environment to a simulation controller for simulating the operation of the robotic vehicle in a mixed (live, virtual and constructive) environment, wherein the effects of virtual and constructive entities on the operation of the robotic vehicle (and vice versa) are simulated. These effects are communicated to the vehicle sensing system, which generates a modified control command for the robotic vehicle including the effects of virtual and constructive entities, causing the robot in the real environment to behave as if virtual and constructive entities existed in the real environment.

  12. An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant

    NASA Astrophysics Data System (ADS)

    Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter

    2017-09-01

    One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with higher autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through product models using a generic approach. The product model is then transformed into a directed graph and used to build, share and define a coarse disassembly plan. To refine the workflow, we formulate “the problem of loosening a connection and the distribution of the work” as a search problem. The created detailed plan consists of a sequence of actions that are used to call, parametrize and execute robot programs to fulfill the assistance. The aim of this research is to equip robot systems with the knowledge and skills to perform their assistance autonomously, ultimately improving the ergonomics of disassembly workstations.

  13. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

  14. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

    PubMed Central

    Chung, Michael Jae-Yoon; Friesen, Abram L.; Fox, Dieter; Meltzoff, Andrew N.; Rao, Rajesh P. N.

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366
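
    The goal-inference step described in both records is a Bayesian update, P(goal | action) ∝ P(action | goal) P(goal). The toy sketch below shows the computation; the likelihood table stands in for the action models the robot learns from self-experience, and its numbers are invented.

      import numpy as np

      def infer_goal(prior, likelihood, action_idx):
          # Posterior over goals after observing one action.
          posterior = prior * likelihood[:, action_idx]
          return posterior / posterior.sum()

      prior = np.array([0.5, 0.5])             # two candidate goals
      likelihood = np.array([[0.8, 0.2],       # P(action | goal 0)
                             [0.3, 0.7]])      # P(action | goal 1)
      print(infer_goal(prior, likelihood, action_idx=0))  # goal 0 favored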

  15. EEG theta and Mu oscillations during perception of human and robot actions

    PubMed Central

    Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.

    2013-01-01

    The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375

  16. EEG theta and Mu oscillations during perception of human and robot actions.

    PubMed

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other.
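
    A minimal sketch of the band-power computation underlying the mu-suppression measure used in these studies: integrate the Welch power spectral density of a sensorimotor channel over 8-13 Hz. The sampling rate and epoching are assumptions; the exact preprocessing pipeline is not reproduced here.

      import numpy as np
      from scipy.signal import welch

      def band_power(eeg_epoch, fs=512.0, band=(8.0, 13.0)):
          # Welch PSD of one channel's epoch, then integrate over the band.
          freqs, psd = welch(eeg_epoch, fs=fs, nperseg=int(fs))
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return np.trapz(psd[mask], freqs[mask])

      # Mu suppression is then typically reported as the log ratio of
      # task-epoch band power to baseline band power.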

  17. 2017 Global Explosive Ordnance Disposal (EOD) Symposium and Exhibition. Held in North Bethesda, MD on 8-9 August 2017.

    DTIC Science & Technology

    2017-08-09

    Commander, Israeli National Police Bomb Squad, Senior CIED Analyst & Author, Mobius Reports 9:00 AM - 6:30 PM Exhibit Hall Open Salons A-E 9:30 AM...Operation Inherent Resolve • COL Frank Davis, USA, Commander, 71st EOD Group 9:00 AM - 9:45 AM Belgium Bombing of 22 March 2016 Briefing • Commander...SYNEXXUS 201 United States Bomb Technician Association 202 55th Ordnance Company (EOD) 203 RE2 Robotics 204 W.S. Darley & Company 207 Roboteam Inc. 210

  18. Weintek interfaces for controlling the position of a robotic arm

    NASA Astrophysics Data System (ADS)

    Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.

    2016-08-01

    The paper presents the use of Weintek panels to control the position of a robotic arm, operated step by step on the three motor axes. The PLC control interface is designed with a Weintek touch screen. The Weintek eMT3070a HMI serves as the user interface for commanding the PLC. This HMI controls the local PLC, entering the coordinates on the X, Y and Z axes. The setup also supports development in a virtual environment for e-learning and for monitoring the robotic arm's actions.

  19. ISS Expedition 18 Sandra Magnus at Robotics Work Station (RWS)

    NASA Image and Video Library

    2008-12-05

    ISS018-E-010555 (5 Dec. 2008) --- Astronaut Sandra Magnus, Expedition 18 flight engineer, operates the Canadarm2 from the robotics work station in the Destiny laboratory of the International Space Station. Using the station's robotic arm, Magnus and astronaut Michael Fincke (out of frame), commander, relocated the ESP-3 from the Mobile Base System back to the Cargo Carrier Attachment System on the P3 truss. The ESP-3 spare parts platform was temporarily parked on the MBS to clear the path for the spacewalks during STS-126.

  20. ISS Expedition 18 Robotics Work Station (RWS) in the US Laboratory

    NASA Image and Video Library

    2008-12-05

    ISS018-E-010564 (5 Dec. 2008) --- Astronaut Michael Fincke, Expedition 18 commander, uses a computer at the robotics work station in the Destiny laboratory of the International Space Station. Using the station's robotic arm, Fincke and astronaut Sandra Magnus (out of frame), flight engineer, relocated the ESP-3 from the Mobile Base System back to the Cargo Carrier Attachment System on the P3 truss. The ESP-3 spare parts platform was temporarily parked on the MBS to clear the path for the spacewalks during STS-126.

  1. Contact Control, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Sternberg, Alex

    The contact control code is a generalized force control scheme meant to interface with a robotic arm controlled using the Robot Operating System (ROS). The code allows the user to specify a control scheme for each control dimension, so that many different task controllers can be built from the same generalized controller. The input to the code includes a maximum velocity, maximum force, maximum displacement, and a control law assigned to each direction, and the output is a 6-degree-of-freedom velocity command that is sent to the robot controller.
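
    A minimal sketch, under assumptions, of the per-axis scheme the record describes: each of the six task dimensions maps a clamped measured force to a commanded velocity, clamped again by the per-axis velocity limit. The proportional law and gain are illustrative; they are not taken from the released code.

      import numpy as np

      def contact_velocity(f_meas, f_max, v_max, gain=0.002):
          # Clamp the measured wrench to the per-axis force limits.
          f = np.clip(np.asarray(f_meas), -np.asarray(f_max), np.asarray(f_max))
          v = gain * f                      # simple admittance law per axis
          # Clamp to the per-axis velocity limits: a 6-DOF twist command.
          return np.clip(v, -np.asarray(v_max), np.asarray(v_max))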

  2. Telerobotics: methodology for the development of through-the-Internet robotic teleoperated system

    NASA Astrophysics Data System (ADS)

    Alvares, Alberto J.; Caribe de Carvalho, Guilherme; Romariz, Luiz S. J.; Alfaro, Sadek C. A.

    1999-11-01

    This work presents a methodology for the development of teleoperated robotic systems through the Internet. First, a bibliographical review of telerobotic systems that use the Internet as the control medium is presented. The methodology is implemented and tested through the development of two systems. The first, denominated RobWebCam, is a two-degree-of-freedom manipulator commanded remotely through the Internet. The second, denominated RobWebLink, teleoperates a six-degree-of-freedom ABB (Asea Brown Boveri) industrial robot.

  3. Using a cognitive architecture for general purpose service robot control

    NASA Astrophysics Data System (ADS)

    Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo

    2015-04-01

    A humanoid service robot equipped with a set of simple action skills, including navigating, grasping, and recognising objects or people, among others, is considered in this paper. Using those skills, the robot should complete a voice command expressed in natural language that encodes a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner is used to decide which skills to activate or in what sequence. Instead, the SOAR cognitive architecture acts as the reasoner, selecting which action the robot should perform next and steering it toward the goal. Our proposal allows new goals to be included simply by adding new skills (without encoding new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.
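
    SOAR itself is not reproduced here; as a toy illustration of planner-free skill sequencing in the spirit described above, the loop below repeatedly fires the first skill whose precondition holds until the spoken goal is satisfied. The skill names and state flags are invented.

      # Each skill: a precondition test and an effect on the symbolic state.
      SKILLS = {
          "navigate": (lambda s: not s["at_target"],
                       lambda s: {**s, "at_target": True}),
          "grasp":    (lambda s: s["at_target"] and not s["holding"],
                       lambda s: {**s, "holding": True}),
      }

      def run(state, goal_test, max_cycles=10):
          for _ in range(max_cycles):
              if goal_test(state):
                  return state
              for name, (precond, effect) in SKILLS.items():
                  if precond(state):
                      state = effect(state)   # execute the selected skill
                      break
          return state

      final = run({"at_target": False, "holding": False},
                  goal_test=lambda s: s["holding"])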

  4. Automation Improvements for Synchrotron Based Small Angle Scattering Using an Inexpensive Robotics Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, John P.

    This paper reports on progress toward creating semi-autonomous motion control platforms for beamline applications using the iRobot Create® platform. The goal is to create beamline research instrumentation whose motion paths are based on the local environment rather than on positions commanded from a control system, that has low integration costs, and that is scalable and easily maintainable.

  5. Robotic Laser Coating Removal System

    DTIC Science & Technology

    2008-07-01

    Materiel Command IRR Internal Rate of Return JTP Joint Test Protocol JTR Joint Test Report LARPS Large Area Robotic Paint Stripping LASER Light...use of laser paint stripping systems is applicable to depainting activities on large off-aircraft components and weapons systems for the Air Force...

  6. Modelling of robotic work cells using agent based-approach

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In the case of modern manufacturing systems, the requirements, regarding both the scope and the characteristics of technical procedures, change dynamically. As a result, the organization of the production system is unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge so that, in a self-organizing agent environment, they represent components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many limitations associated with the characteristics of this production unit. It is possible to distinguish groups of structural components that constitute such a system. This confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should include at least the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  7. A Symbiotic Brain-Machine Interface through Value-Based Decision Making

    PubMed Central

    Mahmoudi, Babak; Sanchez, Justin C.

    2011-01-01

    Background In the development of Brain Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. It is expected that the number and scope of the learning tasks encountered during interaction with the environment as well as the pattern of brain activity will vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decoded users' intended actions by utilizing both motor commands and goal information directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). Methodology The control architecture designed was based on Actor-Critic learning, which is a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that the Nucleus Accumbens (NAcc) contained a rich representation of goal information in terms of predicting the probability of earning reward, and that it could be translated into evaluative feedback for adaptation of the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of the rat brain and implemented a full closed-loop system in which robot actions were decoded from the single unit activity in M1 based on evaluative feedback that was estimated from NAcc. Conclusions Our results suggest that adapting the BMI decoder with evaluative feedback that is directly extracted from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. During closed-loop control, the agent was able to solve a reaching task by capturing the action and reward interdependency in the brain. PMID:21423797

  8. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  9. Computer coordination of limb motion for a three-legged walking robot

    NASA Technical Reports Server (NTRS)

    Klein, C. A.; Patterson, M. R.

    1980-01-01

    Coordination of the limb motion of a vehicle which could perform assembly and maintenance operations on large structures in space is described. Manipulator kinematics, walking robots, the basic control scheme of the robot, and the control of the individual arms are described. Arm velocities are generally described in Cartesian coordinates and converted to joint velocities using the Jacobian matrix. The calculation of a trajectory for an arm, given a sequence of points through which it is to pass, is described, as is the free gait algorithm which controls the lifting and placing of legs for the robot. The generation of commanded velocities for the robot, and the implementation of those velocities by the algorithm, are discussed. Suggestions for further work in the area of robot legged locomotion are presented.
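
    The Cartesian-to-joint velocity conversion mentioned above is the standard resolved-rate relation, q_dot = J^+ x_dot. A minimal sketch follows, assuming an illustrative two-link planar arm rather than the paper's three-legged vehicle; link lengths and angles are invented for the example.

        # Converting a commanded Cartesian velocity to joint velocities with
        # the Jacobian pseudoinverse: q_dot = pinv(J(q)) @ x_dot.
        import numpy as np

        def jacobian_2link(q, l1=1.0, l2=0.8):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-l1*s1 - l2*s12, -l2*s12],
                             [ l1*c1 + l2*c12,  l2*c12]])

        q = np.array([0.3, 0.5])       # joint angles [rad]
        x_dot = np.array([0.1, 0.0])   # desired Cartesian tip velocity [m/s]
        q_dot = np.linalg.pinv(jacobian_2link(q)) @ x_dot
        print(q_dot)                   # joint velocities [rad/s]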

  10. Autonomy in robots and other agents.

    PubMed

    Smithers, T

    1997-06-01

    The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights, for example. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept: a concept essentially the same as the one we find in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to be able effectively to deal with the kinds of environments in which we live and work: environments which have significant large scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.

  11. Advanced Technologies for Robotic Exploration Leading to Human Exploration: Results from the SpaceOps 2015 Workshop

    NASA Technical Reports Server (NTRS)

    Lupisella, Mark L.; Mueller, Thomas

    2016-01-01

    This paper will provide a summary and analysis of the SpaceOps 2015 Workshop all-day session on "Advanced Technologies for Robotic Exploration, Leading to Human Exploration", held at Fucino Space Center, Italy on June 12th, 2015. The session was primarily intended to explore how robotic missions and robotics technologies more generally can help lead to human exploration missions. The session included a wide range of presentations that were roughly grouped into (1) broader background, conceptual, and high-level operations concepts presentations such as the International Space Exploration Coordination Group Roadmap, followed by (2) more detailed narrower presentations such as rover autonomy and communications. The broader presentations helped to provide context and specific technical hooks, and helped lay a foundation for the narrower presentations on more specific challenges and technologies, as well as for the discussion that followed. The discussion that followed the presentations touched on key questions, themes, actions and potential international collaboration opportunities. Some of the themes that were touched on were (1) multi-agent systems, (2) decentralized command and control, (3) autonomy, (4) low-latency teleoperations, (5) science operations, (6) communications, (7) technology pull vs. technology push, and (8) the roles and challenges of operations in early human architecture and mission concept formulation. A number of potential action items resulted from the workshop session, including: (1) using CCSDS as a further collaboration mechanism for human mission operations, (2) making further contact with subject matter experts, (3) initiating informal collaborative efforts to allow for rapid and efficient implementation, and (4) exploring how SpaceOps can support collaboration and information exchange with human exploration efforts. This paper will summarize the session and provide an overview of the above subjects as they emerged from the SpaceOps 2015 Workshop session.

  12. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    PubMed

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains small, which limits the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to specific commands. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the average accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
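
    The 2 × (2^N − 1) command count follows from summing the 2^k left/right sequences over each length k = 1..N. A short enumeration confirms it for the six-class case (N = 2); the "L"/"R" labels are illustrative stand-ins for the two imagery tasks.

        # Enumerating the left/right motor-imagery sequences of length 1..N.
        # Each nonempty sequence is one command; summing 2^k over k = 1..N
        # gives the 2 x (2^N - 1) count stated in the abstract.
        from itertools import product

        def smi_commands(max_len):
            cmds = []
            for k in range(1, max_len + 1):
                cmds.extend(product("LR", repeat=k))
            return cmds

        N = 2
        cmds = smi_commands(N)
        print(len(cmds), 2 * (2**N - 1))  # both print 6 -> the six-class BCI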

  13. On the Utilization of Social Animals as a Model for Social Robotics

    PubMed Central

    Miklósi, Ádám; Gácsi, Márta

    2012-01-01

    Social robotics is a thriving field in building artificial agents. The possibility to construct agents that can engage in meaningful social interaction with humans presents new challenges for engineers. In general, social robotics has been inspired primarily by psychologists with the aim of building human-like robots. Only a small subcategory of “companion robots” (also referred to as robotic pets) was built to mimic animals. In this opinion essay we argue that all social robots should be seen as companions and more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. This view is underlined by the means of an ethological analysis and critical evaluation of present day companion robots. We suggest that human–animal interaction provides a rich source of knowledge for designing social robots that are able to interact with humans under a wide range of conditions. PMID:22457658

  14. Vision-based stabilization of nonholonomic mobile robots by integrating sliding-mode control and adaptive approach

    NASA Astrophysics Data System (ADS)

    Cao, Zhengcai; Yin, Longjie; Fu, Yili

    2013-01-01

    Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions to the problem do not take the robot dynamics into account in the controller design, so the resulting controllers are difficult to apply with satisfactory performance in practice. Besides, many of the approaches suffer from initial speed and torque jumps, which are not practical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, integrating adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller used to generate velocity commands is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce chattering is designed; it generates the torque commands that make the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed using Lyapunov theory. Finally, the control law is simulated in the perturbed case, and the results show that the control scheme solves the stabilization problem effectively. The proposed control law solves the speed and torque jump problems, overcomes external disturbances, and provides a new solution for the vision-based stabilization of mobile robots.
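
    The paper's first-stage controller is adaptive and is not reproduced in the record. For orientation, a classic Lyapunov-based kinematic stabilizer for a unicycle robot in polar coordinates, the family this work builds on, is sketched below; the gains are illustrative assumptions, not the authors' values.

        # Classic polar-coordinate kinematic stabilizer for a unicycle robot:
        # rho  = distance to the goal,
        # alpha = heading error toward the goal,
        # beta  = goal orientation error.
        # Stable for k_rho > 0, k_beta < 0, k_alpha > k_rho.
        K_RHO, K_ALPHA, K_BETA = 0.6, 1.5, -0.4  # illustrative gains

        def kinematic_control(rho, alpha, beta):
            v = K_RHO * rho                      # linear velocity command
            w = K_ALPHA * alpha + K_BETA * beta  # angular velocity command
            return v, w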

  15. Dexterity-Enhanced Telerobotic Microsurgery

    NASA Technical Reports Server (NTRS)

    Charles, Steve; Das, Hari; Ohm, Timothy; Boswell, Curtis; Rodriguez, Guillermo; Steele, Robert; Istrate, Dan

    1997-01-01

    The work reported in this paper is the result of a collaboration between researchers at the Jet Propulsion Laboratory and Steve Charles, MD, a vitreo-retinal surgeon. The Robot Assisted MicroSurgery (RAMS) telerobotic workstation developed at JPL is a prototype of a system that will be completely under the manual control of a surgeon. The system has a slave robot that will hold surgical instruments. The slave robot's motions replicate, in six degrees of freedom, those of the surgeon's hand, measured using a master input device with a surgical-instrument-shaped handle. The surgeon commands motions for the instrument by moving the handle in the desired trajectories. The trajectories are measured, filtered, and scaled down, then used to drive the slave robot.

  16. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

    The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an airbearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or the International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supplying views of EVA operations to IVA and/or ground crews monitoring the EVA, and carrying out independent visual inspections of areas of interest around the spacecraft. To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and the control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.
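
    Task-level command scripting of the kind described can be pictured as a list of named commands handed to a sequencer that aborts on anomaly. The script, command names, and executor below are hypothetical illustrations, not the Mini AERCam command set.

        # Hypothetical task-level script for an inspection scan; a system
        # manager (not shown) would handle anomalies flagged by the executor.
        inspection_scan = [
            ("maneuver_to", {"waypoint": "truss_p1"}),
            ("stationkeep", {"duration_s": 30}),
            ("capture_still", {"target": "radiator_panel"}),
            ("maneuver_to", {"waypoint": "dock_approach"}),
            ("dock", {}),
        ]

        def run_script(script, execute):
            # 'execute' is an assumed callable(cmd, args) -> bool.
            for cmd, args in script:
                if not execute(cmd, args):
                    raise RuntimeError(f"command {cmd} failed; aborting script")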

  17. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  18. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions developed and installed in control system of telerobot. Called "generalized-compliant-motion primitive," one of several task-execution-primitive computer programs, which receives commands from higher-level task-planning programs and executes commands by generating required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.

  19. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N_agents = 60) to conduct an experimental test (N_participants = 72) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness. PMID:28912736

  20. A two-class self-paced BCI to control a robot in four directions.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro

    2011-01-01

    In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance. © 2011 IEEE

  1. Tank-automotive robotics

    NASA Astrophysics Data System (ADS)

    Lane, Gerald R.

    1999-07-01

    To provide an overview of Tank-Automotive Robotics. The briefing will contain program overviews and inter-relationships and technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis will focus on technology developments/approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs to be discussed include: Demo III Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, and the International Ground Robotics Competition (IGRC). Specifically, the paper will discuss unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles will be shown and discussed. Shortfalls in performance to meet the IGRC challenges will be identified.

  2. Application of the HeartLander Crawling Robot for Injection of a Thermally Sensitive Anti-Remodeling Agent for Myocardial Infarction Therapy

    PubMed Central

    Chapman, Michael P.; López González, Jose L.; Goyette, Brina E.; Fujimoto, Kazuro L.; Ma, Zuwei; Wagner, William R.; Zenati, Marco A.; Riviere, Cameron N.

    2011-01-01

    The injection of a mechanical bulking agent into the left ventricular (LV) wall of the heart has shown promise as a therapy for maladaptive remodeling of the myocardium after myocardial infarct (MI). The HeartLander robotic crawler presented itself as an ideal vehicle for minimally invasive, highly accurate epicardial injection of such an agent. Use of the optimal bulking agent, a thermosetting hydrogel developed by our group, presents a number of engineering obstacles, including cooling of the miniaturized injection system while the robot is navigating in the warm environment of a living patient. We present herein a demonstration of an integrated miniature cooling and injection system in the HeartLander crawling robot that is fully biocompatible and capable of multiple injections of a thermosetting hydrogel into dense animal tissue while the entire system is immersed in a 37°C water bath. PMID:21096276

  3. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently; solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system, with the origin fixed at one of the robots and the orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases, each with one hundred agents, were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
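
    Minimizing the maximum distance traveled is the bottleneck assignment problem. One standard approach, sketched below under the assumption of equal robot and slot counts, binary-searches the distance threshold and tests for a perfect bipartite matching among edges under it; the grid and robot positions are invented for the example, and this is not the paper's specific algorithm.

        # Bottleneck assignment by binary search over the distance threshold.
        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import maximum_bipartite_matching

        def bottleneck_assignment(dist):
            candidates = np.unique(dist)     # sorted candidate thresholds
            lo, hi, best = 0, len(candidates) - 1, None
            while lo <= hi:
                mid = (lo + hi) // 2
                graph = csr_matrix(dist <= candidates[mid])
                match = maximum_bipartite_matching(graph, perm_type='column')
                if (match >= 0).all():       # perfect matching exists
                    best, hi = match, mid - 1
                else:
                    lo = mid + 1
            return best                      # slot index assigned to each robot

        rng = np.random.default_rng(1)
        robots = rng.uniform(0, 100, size=(100, 2))          # random positions
        slots = np.stack(np.meshgrid(np.arange(10) * 10,
                                     np.arange(10) * 10), -1).reshape(-1, 2)
        dist = np.linalg.norm(robots[:, None] - slots[None], axis=-1)
        assignment = bottleneck_assignment(dist)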

  4. Electrical power technology for robotic planetary rovers

    NASA Technical Reports Server (NTRS)

    Bankston, C. P.; Shirbacheh, M.; Bents, D. J.; Bozek, J. M.

    1993-01-01

    Power technologies which will enable a range of robotic rover vehicle missions by the end of the 1990s and beyond are discussed. The electrical power system is the most critical system for reliability and life, since all other on board functions (mobility, navigation, command and data, communications, and the scientific payload instruments) require electrical power. The following are discussed: power generation, energy storage, power management and distribution, and thermal management.

  5. Caregivers' requirements for in-home robotic agent for supporting community-living elderly subjects with cognitive impairment.

    PubMed

    Faucounau, Véronique; Wu, Ya-Huei; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie

    2009-01-01

    Older people are an important and growing sector of the population. This demographic change raises the profile of frailty and disability within the world's population. In such conditions, many older people need assistance to perform daily activities. Most of the support is given by family members, who are now a new target in the therapeutic approach. With advances in technology, robotics becomes increasingly important as a means of supporting older people at home. In order to ensure appropriate technology, 30 caregivers filled out a self-administered questionnaire including questions on their needs in supporting their proxy and their requirements concerning the robotic agent's functions and modes of action. This paper points out the functions to be integrated into the robot in order to support caregivers in the care of their proxy. The results also show that caregivers have a positive attitude towards robotic agents.

  6. Phoenix Telemetry Processor

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice

    2013-01-01

    Phxtelemproc is a C/C++ based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments including surface stereo imager (SSI); robotic arm camera (RAC); robotic arm (RA); microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance science of the near polar regions of Mars, and were used to prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.

  7. A decade of telerobotics in rehabilitation: Demonstrated utility blocked by the high cost of manipulation and the complexity of the user interface

    NASA Technical Reports Server (NTRS)

    Leifer, Larry; Michalowski, Stefan; Vanderloos, Machiel

    1991-01-01

    The Stanford/VA Interactive Robotics Laboratory set out in 1978 to test the hypothesis that industrial robotics technology could be applied to serve the manipulation needs of severely impaired individuals. Five generations of hardware, three generations of system software, and over 125 experimental subjects later, we believe that genuine utility is achievable. The experience includes development of over 65 task applications using voiced command, joystick control, natural language command and 3D object designation technology. A brief foray into virtual environments, using flight simulator technology, was instructive. If reality and virtuality come for comparable prices, you cannot beat reality. A detailed review of assistive robot anatomy and the performance specifications needed to achieve cost/beneficial utility will be used to support discussion of the future of rehabilitation telerobotics. Poised on the threshold of commercial viability, but constrained by the high cost of technically adequate manipulators, this worthy application domain flounders temporarily. In the long run, it will be the user interface that governs utility.

  8. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    NASA Astrophysics Data System (ADS)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application, implemented under the Processing integrated development environment. The application acquires the data from the headset and sends suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.

  9. International Space Station (ISS)

    NASA Image and Video Library

    2002-06-05

    Aboard the Space Shuttle Orbiter Endeavour, the STS-111 mission was launched on June 5, 2002 at 5:22 pm EDT from Kennedy's launch pad. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot, and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valeri G. Korzun, commander, Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish mission objectives: the delivery and installation of a new platform for the ISS robotic arm, the Mobile Base System (MBS) which is an important part of the Station's Mobile Servicing System allowing the robotic arm to travel the length of the Station; the replacement of a wrist roll joint on the Station's robotic arm; and unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. Landing on June 19, 2002, the 14-day STS-111 mission was the 14th Shuttle mission to visit the ISS.

  10. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows straightforward integration with the TCP/IP internet protocols. For inspection of the environment the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras make it feasible to detect obstacles, measure distances, and create a map of the room.
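
    With TCP/IP available on the vehicle, a command-and-telemetry link reduces to a few lines of socket code. A minimal sketch; the port number and the JSON line protocol are invented for illustration, and a robot-side server that replies with one telemetry line per command is assumed.

        # Hypothetical command/telemetry exchange over a WLAN TCP link.
        import json, socket

        def send_command(host, linear, angular, port=9000):
            msg = json.dumps({"cmd": "drive", "v": linear, "w": angular}).encode()
            with socket.create_connection((host, port), timeout=1.0) as sock:
                sock.sendall(msg + b"\n")
                # Assumed: the robot answers each command with a JSON line.
                return json.loads(sock.makefile().readline())

        # e.g. send_command("192.168.0.42", linear=0.2, angular=0.0)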

  11. RAPID: Collaborative Commanding and Monitoring of Lunar Assets

    NASA Technical Reports Server (NTRS)

    Torres, Recaredo J.; Mittman, David S.; Powell, Mark W.; Norris, Jeffrey S.; Joswig, Joseph C.; Crockett, Thomas M.; Abramyan, Lucy; Shams, Khawaja S.; Wallick, Michael; Allan, Mark; hide

    2011-01-01

    RAPID (Robot Application Programming Interface Delegate) software utilizes highly robust technology to facilitate commanding and monitoring of lunar assets. RAPID provides the ability for intercenter communication, since these assets are developed in multiple NASA centers. RAPID is targeted at the task of lunar operations; specifically, operations that deal with robotic assets, cranes, and astronaut spacesuits, often developed at different NASA centers. RAPID allows for a uniform way to command and monitor these assets. Commands can be issued to take images, and monitoring is done via telemetry data from the asset. There are two unique features to RAPID: First, it allows any operator from any NASA center to control any NASA lunar asset, regardless of location. Second, by abstracting the native language for specific assets to a common set of messages, an operator may control and monitor any NASA lunar asset by being trained only on the use of RAPID, rather than the specific asset. RAPID is easier to use and more powerful than its predecessor, the Astronaut Interface Device (AID). Utilizing the new, robust DDS (Data Distribution Service) middleware, development in RAPID has sped up significantly compared with the old middleware. The API is built upon the Java Eclipse Platform, which, combined with DDS, provides a platform-independent software architecture, simplifying development of RAPID components. As RAPID continues to evolve and new messages are designed and implemented, operators for future lunar missions will have a rich environment for commanding and monitoring assets.

  12. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside, but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame, or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented, and second, the estimates have been used to influence the search algorithm of the stereo tracker. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot is crossing rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
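
    A one-axis sketch of the gyro/accelerometer fusion described above: the filter integrates the angular rate to predict the tilt and corrects the resulting drift with the accelerometer's tilt measurement. The noise parameters are illustrative assumptions, not values from the project.

        # Scalar Kalman filter fusing integrated gyro rate with accelerometer
        # tilt; q and r are (assumed) process and measurement noise variances.
        import numpy as np

        def kalman_tilt(gyro_rates, accel_angles, dt=0.01, q=1e-4, r=1e-2):
            theta, p = 0.0, 1.0          # state estimate and its variance
            estimates = []
            for w, z in zip(gyro_rates, accel_angles):
                theta += w * dt          # predict: integrate angular rate
                p += q
                k = p / (p + r)          # update: blend in accelerometer tilt
                theta += k * (z - theta)
                p *= (1.0 - k)
                estimates.append(theta)
            return np.array(estimates)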

  13. Brain-controlled telepresence robot by motor-disabled people.

    PubMed

    Tonin, Luca; Carlson, Tom; Leeb, Robert; del R Millán, José

    2011-01-01

    In this paper we present the first results of users with disabilities in mentally controlling a telepresence robot, a rather complex task, as the robot is continuously moving and the user must control it for a long period of time (over 6 minutes) to travel the whole path. These two users drove the telepresence robot from their clinic, more than 100 km away. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performances similar to those of a group of four healthy users who were familiar with the environment. In particular, the experimental results reported in this paper demonstrate the benefits of shared control for brain-controlled telepresence robots. It allows all subjects (including novel BMI subjects, like our users with disabilities) to complete a complex task in a similar time and with a similar number of commands to those required by manual control.

  14. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  15. Method and associated apparatus for capturing, servicing, and de-orbiting earth satellites using robotics

    NASA Technical Reports Server (NTRS)

    Cepollina, Frank J. (Inventor); Corbo, James E. (Inventor); Burns, Richard D. (Inventor); Jedhrich, Nicholas M. (Inventor); Holz, Jill M. (Inventor)

    2009-01-01

    This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR, and matching tumble rates, rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, which is allowing a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time as well as the ability to intervene using manual override to teleoperate the robot.

  16. Robot Teleoperation and Perception Assistance with a Virtual Holographic Display

    NASA Technical Reports Server (NTRS)

    Goddard, Charles O.

    2012-01-01

    Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space, with a stylus whose position and rotation are tracked inside it. Using this system, a possible interface for this sort of robot control is proposed.

  17. Interactive robot control system and method of use

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)

    2012-01-01

    A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays a status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events, and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying status and operation of the robot using the CDL. The parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within the sequence.

  18. Workspace Safe Operation of a Force- or Impedance-Controlled Robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Strawser, Philip A. (Inventor); Yamokoski, John D. (Inventor)

    2013-01-01

    A method of controlling a robotic manipulator of a force- or impedance-controlled robot within an unstructured workspace includes imposing a saturation limit on a static force applied by the manipulator to its surrounding environment, and may include determining a contact force between the manipulator and an object in the unstructured workspace, and executing a dynamic reflex when the contact force exceeds a threshold to thereby alleviate an inertial impulse not addressed by the saturation limited static force. The method may include calculating a required reflex torque to be imparted by a joint actuator to a robotic joint. A robotic system includes a robotic manipulator having an unstructured workspace and a controller that is electrically connected to the manipulator, and which controls the manipulator using force- or impedance-based commands. The controller, which is also disclosed herein, automatically imposes the saturation limit and may execute the dynamic reflex noted above.
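
    The two safety layers named in the abstract, a saturation limit on the commanded static force and a reflex triggered by excessive contact force, can be sketched as follows; all limits and the reflex gain are illustrative assumptions, not values from the patent.

        # Sketch of force saturation plus a contact-force-triggered reflex.
        import numpy as np

        F_MAX, F_REFLEX, K_REFLEX = 20.0, 35.0, 0.5  # N, N, unitless (assumed)

        def safe_force_command(f_cmd, f_contact):
            f_cmd = np.clip(f_cmd, -F_MAX, F_MAX)    # saturation limit
            if abs(f_contact) > F_REFLEX:            # dynamic reflex: back off
                f_cmd = -K_REFLEX * f_contact        # oppose the impulse
            return f_cmd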

  19. iss050e059529

    NASA Image and Video Library

    2017-03-24

    iss050e059529 (03/24/2017) --- Flight Engineer Thomas Pesquet of ESA (European Space Agency) is seen performing maintenance on the Dextre robot during a spacewalk. Pesquet and Expedition 50 Commander Shane Kimbrough of NASA conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  20. iss050e059608

    NASA Image and Video Library

    2017-03-24

    iss050e059608 (03/24/2017) --- NASA astronaut Peggy Whitson controls the robotic arm aboard the International Space Station during a spacewalk. Expedition 50 Commander Shane Kimbrough of NASA and Flight Engineer Thomas Pesquet of ESA (European Space Agency) conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  1. Speech and gesture interfaces for squad-level human-robot teaming

    NASA Astrophysics Data System (ADS)

    Harris, Jonathan; Barber, Daniel

    2014-06-01

    As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human-robot teaming is unclear. The purpose of the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically in regard to verbally instructing the robot to execute reconnaissance and surveillance tasks. The resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g., the U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents the classification accuracy of these devices for both speech and gesture modalities independently.

  2. The problem with multiple robots

    NASA Technical Reports Server (NTRS)

    Huber, Marcus J.; Kenny, Patrick G.

    1994-01-01

    The issues that can arise in research associated with multiple, robotic agents are discussed. Two particular multi-robot projects are presented as examples. This paper was written in the hope that it might ease the transition from single to multiple robot research.

  3. Physical Scaffolding Accelerates the Evolution of Robot Behavior.

    PubMed

    Buckingham, David; Bongard, Josh

    2017-01-01

    In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: A fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.

  4. Controlling Robots with the Mind.

    ERIC Educational Resources Information Center

    Nicolelis, Miguel A. L.; Chapin, John K.

    2002-01-01

    Reports on research that shows that people with nerve or limb injuries may one day be able to command wheelchairs, prosthetics, and even paralyzed arms and legs by "thinking them through" the motions. (Author/MM)

  5. The Aerosonde Robotic Aircraft: A New Paradigm for Environmental Observations.

    NASA Astrophysics Data System (ADS)

    Holland, G. J.; Webster, P. J.; Curry, J. A.; Tyrell, G.; Gauntlett, D.; Brett, G.; Becker, J.; Hoag, R.; Vaglienti, W.

    2001-05-01

    The Aerosonde is a small robotic aircraft designed for highly flexible and inexpensive operations. Missions are conducted in a completely robotic mode, with the aircraft under the command of a ground controller who monitors the mission. Here we provide an update on the Aerosonde development and operations and expand on the vision for the future, including instrument payloads, observational strategies, and platform capabilities. The aircraft was conceived in 1992 and developed to operational status in 1995-98, after a period of early prototyping. Continuing field operations and development since 1998 have led to the Aerosonde Mark 3, with ~2000 flight hours completed. A defined development path through to 2002 will enable the aircraft to become increasingly more robust with increased flexibility in the range and type of operations that can be achieved. An Aerosonde global reconnaissance facility is being developed that consists of launch and recovery sites dispersed around the globe. The use of satellite communications and internet technology enables an operation in which all aircraft around the globe are under the command of a single center. During operation, users will receive data at their home institution in near-real time via the virtual field environment, allowing the user to update the mission through interaction with the global command center. Sophisticated applications of the Aerosonde will be enabled by the development of a variety of interchangeable instrument payloads and the operation of Smart Aerosonde Clusters that allow a cluster of Aerosondes to interact intelligently in response to the data being collected.

  6. Dragon Spacecraft grappled by SSRMS

    NASA Image and Video Library

    2015-04-17

    ISS043E122264 (04/17/2015) --- The Canadarm2 reaches out to grapple the SpaceX Dragon cargo spacecraft and prepare it to be pulled into its port on the International Space Station. Robotics officers at Mission Control at the Johnson Space Center in Houston, Texas, will command the Canadarm2 robotic arm to maneuver Dragon to its installation position at the Earth-facing port of the Harmony module, where it will reside for the next five weeks.

  7. Dragon crew shots

    NASA Image and Video Library

    2012-10-10

    ISS033-E-011279 (10 Oct. 2012) --- NASA astronaut Sunita Williams, Expedition 33 commander; and Japan Aerospace Exploration Agency astronaut Aki Hoshide, flight engineer, work the controls at the robotics workstation in the International Space Station’s seven-windowed Cupola during the rendezvous and berthing of the SpaceX Dragon commercial cargo craft. Using the Canadarm2 robotic arm, Williams and Hoshide captured and berthed Dragon to the Earth-facing side of the Harmony node Oct. 10, 2012.

  8. Neural-Network Control Of Prosthetic And Robotic Hands

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  9. Defining Soldier Intent in a Human-Robot Natural Language Interaction Context

    DTIC Science & Technology

    2017-10-01

    ... this burden on the human and expand the scope of human–robot operations, this project investigates fundamental research issues in the autonomous ... attempted to devise a quantitative metric for the Shared Interpretation of Commander's Intent (SICI). The authors' background research indicated that ... Another interesting set of results were the cases where the battalion and company commanders disagreed on the meaning of key terms, such as “delay” ...

  10. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  11. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    DTIC Science & Technology

    2015-04-01

    A novel, multi-segmented ... high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the ... to be demonstrated. The Unmanned Systems Group at SPAWAR Systems Center Pacific has developed a multi-segment magnetic crawler robot (MSMR) ...

  12. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777

  13. BGen: A UML Behavior Network Generator Tool

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Reder, Leonard J.; Balian, Harry

    2010-01-01

    BGen software was designed for autogeneration of code based on a graphical representation of a behavior network used for controlling autonomous vehicles. A common format used for describing a behavior network, such as that used in the JPL-developed behavior-based control system, CARACaS ["Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40] includes a graph with sensory inputs flowing through the behaviors in order to generate the signals for the actuators that drive and steer the vehicle. A computer program to translate Unified Modeling Language (UML) Freeform Implementation Diagrams into a legacy C implementation of a Behavior Network has been developed in order to simplify the development of C-code for behavior-based control systems. UML is a popular standard developed by the Object Management Group (OMG) to model software architectures graphically. The C implementation of a Behavior Network functions as a decision tree.
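
    To make the data flow concrete, here is a minimal behavior-network sketch, assuming a simple priority arbitration between behaviors; it illustrates the sensors-through-behaviors-to-actuators pattern, not BGen or CARACaS code.

        # Illustrative only: sensor values flow through prioritized behaviors,
        # each of which may claim the actuator command (drive and steer).
        def avoid(sensors):
            # Highest priority: steer away when an obstacle is close.
            if sensors["obstacle_m"] < 1.0:
                return {"steer": -1.0, "speed": 0.2}
            return None

        def seek_goal(sensors):
            # Lower priority: steer toward the goal bearing.
            steer = max(-1.0, min(1.0, sensors["goal_bearing_rad"]))
            return {"steer": steer, "speed": 0.8}

        BEHAVIORS = [avoid, seek_goal]          # ordered by priority

        def arbitrate(sensors):
            for behavior in BEHAVIORS:          # first claiming behavior wins
                command = behavior(sensors)
                if command is not None:
                    return command
            return {"steer": 0.0, "speed": 0.0}

        print(arbitrate({"obstacle_m": 0.6, "goal_bearing_rad": 0.3}))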

  14. Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance

    NASA Technical Reports Server (NTRS)

    Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John

    2003-01-01

    We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
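
    The control flow lends itself to a compact sketch. The recognizers below are stubs and the help message is invented for illustration; only the fallback logic mirrors the description above.

        # Two recognizers run on each utterance; the accurate grammar-based
        # result is preferred, and the SLM hypothesis only drives targeted help.
        def grammar_recognize(audio):
            """Return an in-coverage parse, or None on failure (stubbed)."""
            return None                        # simulate an out-of-coverage utterance

        def slm_recognize(audio):
            """Less accurate statistical recognizer (stubbed)."""
            return "fly the helicopter up towards over the tower"

        def targeted_help(hypothesis):
            """Echo what was heard, diagnose it, and give an in-coverage example."""
            return (f"I heard: '{hypothesis}'. That phrasing is out of coverage. "
                    "Try a shorter command such as: 'fly to the tower'.")

        def handle_utterance(audio):
            parse = grammar_recognize(audio)
            if parse is not None:
                return ("execute", parse)      # grammar succeeded: drop SLM result
            hypothesis = slm_recognize(audio)
            if hypothesis:
                return ("help", targeted_help(hypothesis))
            return ("help", "I did not understand that at all.")

        print(handle_utterance(b"<audio>"))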

  15. Enhancing the effectiveness of human-robot teaming with a closed-loop system.

    PubMed

    Teo, Grace; Reinerman-Jones, Lauren; Matthews, Gerald; Szalma, James; Jentsch, Florian; Hancock, Peter

    2018-02-01

    With technological developments in robotics and their increasing deployment, human-robot teams are set to be a mainstay in the future. To develop robots that possess teaming capabilities, such as being able to communicate implicitly, the present study implemented a closed-loop system. This system enabled the robot to provide adaptive aid without the need for explicit commands from the human teammate, through the use of multiple physiological workload measures. Such measures of workload vary in sensitivity and there is large inter-individual variability in physiological responses to imposed taskload. Workload models enacted via closed-loop system should accommodate such individual variability. The present research investigated the effects of the adaptive robot aid vs. imposed aid on performance and workload. Results showed that adaptive robot aid driven by an individualized workload model for physiological response resulted in greater improvements in performance compared to aid that was simply imposed by the system.
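
    A minimal sketch of an individualized closed-loop trigger, assuming each physiological channel is z-scored against the operator's own baseline and fused with per-person weights; the channels, weights, and threshold are invented for illustration and are not the study's model.

        # Adaptive aid fires only when the individually normalized, fused
        # workload estimate exceeds a threshold; no explicit command needed.
        class WorkloadModel:
            def __init__(self, baselines, weights, threshold=1.5):
                self.baselines = baselines    # {channel: (mean, stdev)} for this person
                self.weights = weights        # per-person channel weights
                self.threshold = threshold

            def fused_workload(self, sample):
                score = 0.0
                for channel, value in sample.items():
                    mean, stdev = self.baselines[channel]
                    score += self.weights[channel] * (value - mean) / stdev
                return score

            def should_aid(self, sample):
                return self.fused_workload(sample) > self.threshold

        model = WorkloadModel(
            baselines={"heart_rate": (70.0, 5.0), "eeg_engagement": (0.4, 0.1)},
            weights={"heart_rate": 0.5, "eeg_engagement": 0.5})
        print(model.should_aid({"heart_rate": 88.0, "eeg_engagement": 0.62}))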

  16. Robotics development for the enhancement of space endeavors

    NASA Astrophysics Data System (ADS)

    Mauceri, A. J.; Clarke, Margaret M.

    Telerobotics and robotics development activities to support NASA's goal of increasing opportunities in space commercialization and exploration are described. Rockwell International's activities center on using robotics to improve efficiency and safety in three related areas: remote control of autonomous systems, automated nondestructive evaluation of aspects of vehicle integrity, and the use of robotics in space vehicle ground processing operations. In the first area, autonomous robotic control, Rockwell is using the control architecture NASREM as the foundation for the high-level command of robotic tasks. In the second area, we have demonstrated the use of nondestructive evaluation (using acoustic excitation and laser sensors) to evaluate the integrity of space vehicle surface material bonds, using Orbiter 102 as the test case. In the third area, Rockwell is building an automated version of the present manual tool used for Space Shuttle surface tile re-waterproofing. The tool will be integrated into an orbiter processing robot being developed by a KSC-led team.

  17. Towards the Verification of Human-Robot Teams

    NASA Technical Reports Server (NTRS)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  18. Science Autonomy in Robotic Exploration

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    Historical mission operations have involved: (1) return of scientific data; (2) evaluation of these data by scientists; (3) recommendations for future mission activity by scientists; (4) commands for these transmitted to the craft; and (5) the activity being undertaken. This cycle is repeated throughout the mission with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long range traverses or rapid response to any novel or unexpected situations. In addition to real-time response issues, imaging and/or spectroscopic devices can produce tremendous data volumes during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities prior to the ability to transmit it to Earth. Additionally, the necessary communication bandwidths are restrictive enough so that only a small portion of these data can actually be returned to Earth. Such scenarios suggest enabling some science decisions to be made on-board the robots. These decisions involve automating various aspects of scientific discovery instead of the electromechanical control, health, and navigation issues associated with robotic operations. The robot retains access to the full data fidelity obtained by its scientific sensors, and is in the best position to implement actions based upon these data. Such an approach would eventually enable the robot to alter observations and assure only the highest quality data is obtained for analysis. Additionally, the robot can begin to understand what is scientifically interesting and implement alternative observing sequences, because the observed data deviate from expectations based upon current theories/models of planetary processes. Such interesting data and/or conclusions can then be prioritized and selectively transmitted to Earth, reducing memory and communications demands. Results of Ames' current work in this area will be presented.

  19. Robotic Exploration: The Role of Science Autonomy

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; DeVincenzi, Donald (Technical Monitor)

    2001-01-01

    Historical mission operations have involved: (1) return of scientific data; (2) evaluation of these data by scientists; (3) recommendations for future mission activity by scientists; (4) commands for these transmitted to the craft; and (5) the activity being undertaken. This cycle is repeated throughout the mission with command opportunities once or twice per day. For a rover, this historical cycle is not amenable to rapid long range traverses or rapid response to any novel or unexpected situations. In addition to real-time response issues, imaging and/or spectroscopic devices can produce tremendous data volumes during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities prior to the ability to transmit it to Earth. Additionally, the necessary communication bandwidths are restrictive enough so that only a small portion of these data can actually be returned to Earth. Such scenarios suggest enabling some science decisions to be made on-board the robots. These decisions involve automating various aspects of scientific discovery instead of the electromechanical control, health, and navigation issues associated with robotic operations. The robot retains access to the full data fidelity obtained by its scientific sensors, and is in the best position to implement actions based upon these data. Such an approach would eventually enable the robot to alter observations and assure only the highest quality data is obtained for analysis. Additionally, the robot can begin to understand what is scientifically interesting and implement alternative observing sequences, because the observed data deviate from expectations based upon current theories/models of planetary processes. Such interesting data and/or conclusions can then be prioritized and selectively transmitted to Earth, reducing memory and communications demands. Results of Ames' current work in this area will be presented.

  20. Plan execution monitoring with distributed intelligent agents for battle command

    NASA Astrophysics Data System (ADS)

    Allen, James P.; Barry, Kevin P.; McCormick, John M.; Paul, Ross A.

    2004-07-01

    As military tactics evolve toward execution-centric operations, the ability to analyze vast amounts of mission relevant data is essential to command and control decision making. To maintain operational tempo and achieve information superiority we have developed Vigilant Advisor, a mobile agent-based distributed Plan Execution Monitoring system. It provides military commanders with continuous contingency monitoring tailored to their preferences while overcoming the network bandwidth problem often associated with traditional remote data querying. This paper presents an overview of Plan Execution Monitoring as well as a detailed view of the Vigilant Advisor system including key features and statistical analysis of resource savings provided by its mobile agent-based approach.

  1. Model-free learning on robot kinematic chains using a nested multi-agent topology

    NASA Astrophysics Data System (ADS)

    Karigiannis, John N.; Tzafestas, Costas S.

    2016-11-01

    This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree-of-freedom (DOF) is finally mapped onto a distinct agent. Each one of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state-action domain. This paper constitutes in fact a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such a distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology. Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, where new agents can be recursively added in the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions which emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.
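
    The per-DOF agent idea can be caricatured in a few lines. The sketch below replaces the paper's fuzzy reinforcement learning with plain stochastic hill climbing on a planar three-link chain, so it shows only the "one learning agent per joint, shared task reward" structure under that stated simplification.

        # Toy model: each joint independently perturbs its angle and keeps the
        # change only when the shared reward (negative distance from the
        # end-effector to a goal position) improves.
        import math, random

        L = [1.0, 1.0, 1.0]                  # link lengths of the planar chain
        goal = (1.2, 1.8)

        def end_effector(angles):
            x = y = a = 0.0
            for length, theta in zip(L, angles):
                a += theta
                x += length * math.cos(a)
                y += length * math.sin(a)
            return x, y

        def reward(angles):
            x, y = end_effector(angles)
            return -math.hypot(x - goal[0], y - goal[1])

        random.seed(1)
        angles = [0.0, 0.0, 0.0]
        for episode in range(2000):
            for joint in range(len(angles)):      # each DOF acts as its own agent
                trial = list(angles)
                trial[joint] += random.gauss(0.0, 0.05)
                if reward(trial) > reward(angles):
                    angles = trial                # keep only improving local moves
        print("final end-effector error:", -reward(angles))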

  2. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    NASA Technical Reports Server (NTRS)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  3. New robotics: design principles for intelligent systems.

    PubMed

    Pfeifer, Rolf; Iida, Fumiya; Bongard, Josh

    2005-01-01

    New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only "nice to have" but is in fact a necessary tool for designing embodied agents.

  4. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots.

    PubMed

    Li, Xin; Bilbao, Sonia; Martín-Wanton, Tamara; Bastos, Joaquim; Rodriguez, Jonathan

    2017-03-11

    In order to facilitate cooperation between underwater robots, robots must exchange information with unambiguous meaning. However, heterogeneity in the information pertaining to different robots is a major obstruction. Therefore, this paper presents a networked ontology, named the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) ontology, to address information heterogeneity and enable robots to have the same understanding of exchanged information. The SWARMs ontology uses a core ontology to interrelate a set of domain-specific ontologies, including the mission and planning, the robotic vehicle, the communication and networking, and the environment recognition and sensing ontology. In addition, the SWARMs ontology utilizes ontology constructs defined in the PR-OWL ontology to annotate context uncertainty based on the Multi-Entity Bayesian Network (MEBN) theory. Thus, the SWARMs ontology can provide both a formal specification for information that is necessarily exchanged between robots and a command and control entity, and also support for uncertainty reasoning. A scenario on chemical pollution monitoring is described and used to showcase how the SWARMs ontology can be instantiated, be extended, represent context uncertainty, and support uncertainty reasoning.

  5. STS-109 Crew Training

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Footage shows the crew of STS-109 (Commander Scott Altman, Pilot Duane Carey, Payload Commander John Grunsfeld, and Mission Specialists Nancy Currie, James Newman, Richard Linnehan, and Michael Massimino) during various parts of their training. Scenes show the crew's photo session, Post Landing Egress practice, training in Dome Simulator, Extravehicular Activity Training in the Neutral Buoyancy Laboratory (NBL), and using the Virtual Reality Laboratory Robotic Arm. The crew is also seen tasting food as they choose their menus for on-orbit meals.

  6. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  7. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.

  8. iss053e156180

    NASA Image and Video Library

    2017-11-09

    iss053e156180 (Nov. 9, 2017) --- Expedition 53 Commander Randy Bresnik (foreground) and Flight Engineer Paolo Nespoli are at the controls of the robotics workstation in the Destiny laboratory module training for the approach, rendezvous and grapple of the Orbital ATK Cygnus resupply ship. Both astronauts were in the cupola operating the Canadarm2 robotic arm to grapple Cygnus when it arrived Nov. 14, 2017, delivering nearly 7,400 pounds of crew supplies, science experiments, computer gear, vehicle equipment and spacewalk hardware.

  9. iss053e156160

    NASA Image and Video Library

    2017-11-09

    iss053e156160 (Nov. 9, 2017) --- Expedition 53 Commander Randy Bresnik is at the controls of the robotics workstation in the Destiny laboratory module training for the approach, rendezvous and grapple of the Orbital ATK Cygnus resupply ship. He and Flight Engineer Paolo Nespoli were in the cupola operating the Canadarm2 robotic arm to grapple Cygnus when it arrived Nov. 14, 2017, delivering nearly 7,400 pounds of crew supplies, science experiments, computer gear, vehicle equipment and spacewalk hardware.

  10. 1200737

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  11. 1200739

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  12. 1200738

    NASA Image and Video Library

    2012-08-21

    FINAL DEMONSTRATION OF A WIRELESS DATA TASK SUPPORTED BY SLS ADVANCED DEVELOPMENT USED TO DEMONSTRATE REAL-TIME VIDEO OVER WIRELESS CONNECTIONS ALONG WITH DATA AND COMMANDS AS DEMONSTRATED VIA THE ROBOTIC ARMS. THE ARMS AND VIDEO CAMERAS WERE MOUNTED ON FREE FLOATING AIR-BEARING VEHICLES TO SIMULATE CONDITIONS IN SPACE. THEY WERE USED TO SHOW HOW A CHASE VEHICLE COULD MOVE UP TO AND CAPTURE A SATELLITE, SUCH AS THE FASTSAT MOCKUP, DEMONSTRATING HOW ROBOTIC TECHNOLOGY AND SMALL SPACECRAFT COULD ASSIST WITH ORBITAL DEBRIS MITIGATION.

  13. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims, where robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  14. Design, Kinematic Optimization, and Evaluation of a Teleoperated System for Middle Ear Microsurgery

    PubMed Central

    Miroir, Mathieu; Nguyen, Yann; Szewczyk, Jérôme; Sterkers, Olivier; Bozorg Grayeli, Alexis

    2012-01-01

    Middle ear surgery involves the smallest and the most fragile bones of the human body. Since microsurgical gestures and a submillimetric precision are required in these procedures, the outcome can be potentially improved by robotic assistance. Today, there is no commercially available device in this field. Here, we describe a method to design a teleoperated assistance robotic system dedicated to the middle ear surgery. Determination of design specifications, the kinematic structure, and its optimization are detailed. The robot-surgeon interface and the command modes are provided. Finally, the system is evaluated by realistic tasks in experimental dedicated settings and in human temporal bone specimens. PMID:22927789

  15. Adjustable impedance, force feedback and command language aids for telerobotics (parts 1-4 of an 8-part MIT progress report)

    NASA Technical Reports Server (NTRS)

    Sheridan, Thomas B.; Raju, G. Jagganath; Buzan, Forrest T.; Yared, Wael; Park, Jong

    1989-01-01

    Projects recently completed or in progress at the MIT Man-Machine Systems Laboratory are summarized. (1) A 2-part impedance network model of a single-degree-of-freedom remote manipulation system is presented, in which a human operator at the master port interacts with a task object at the slave port in a remote location. (2) The extension of the predictor concept to include force feedback and dynamic modeling of the manipulator and the environment is addressed. (3) A system was constructed to infer intent from the operator's commands and the teleoperation context, and generalize this information to interpret future commands. (4) A command language system is being designed that is robust, easy to learn, and has more natural man-machine communication. A general telerobot problem selected as an important command language context is finding a collision-free path for a robot.
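
    Item (1) rests on the textbook single-degree-of-freedom impedance relation, which a brief sketch can make concrete; the gains and state values below are illustrative, not taken from the report.

        # Single-DOF impedance relation, F = M*a + B*v + K*(x - x_d); the
        # numbers are illustrative placeholders.
        def impedance_force(M, B, K, x, v, a, x_desired):
            return M * a + B * v + K * (x - x_desired)

        # A stiff spring-damper resisting penetration past x_d = 0.10 m:
        print(impedance_force(M=1.0, B=20.0, K=400.0,
                              x=0.12, v=0.05, a=0.0, x_desired=0.10))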

  16. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application in this area is the pushing operation, in which the robot, guided by multiple sensors, moves an object by pushing it. Generally, a pushing operation consists of "approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the impact when the robot body or tool-tip first hits the object is neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance control is a control system with trajectory compensation so that an external force may be followed. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer(2), which provides contact sensation. The other compensates for non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before the object is touched, realized here with a visual sensor. Instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, the experimental results show the validity of the proposed method.
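
    The dual-compliance idea reduces to two correction terms on the command reference, as in the hedged sketch below; the gains and the shape of the visual "virtual force" are assumptions, not the paper's identified model.

        # One compliance term reacts to the estimated contact force (reaction
        # torque observer); the other reacts to a virtual force derived from
        # visual depth, so the reference softens before contact occurs.
        def virtual_force(depth_m, influence_m=0.3, gain=50.0):
            """Non-contact sensation: grows as visual depth to the object shrinks."""
            if depth_m >= influence_m:
                return 0.0
            return gain * (influence_m - depth_m)

        def compliant_reference(x_cmd, f_contact, depth_m,
                                c_contact=0.002, c_visual=0.001):
            """Both sensations push the commanded position reference back."""
            return x_cmd - c_contact * f_contact - c_visual * virtual_force(depth_m)

        # Approaching (no contact yet): only the visual term softens the reference.
        print(compliant_reference(x_cmd=0.50, f_contact=0.0, depth_m=0.10))
        # Touching: the estimated contact force dominates.
        print(compliant_reference(x_cmd=0.50, f_contact=8.0, depth_m=0.0))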

  17. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance so that execution of the commands can be stopped before damage is done. The method involves utilization of both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient (with respect to both computation time and computer memory), relative to the representations used in most prior methods. The present method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. In this method, objects are represented and collisions detected by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs). As the name of this technique indicates, an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended in this method to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected for incorporation into the present method because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus computational speed) on the other hand.
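
    The OBB test itself is an instance of the separating-axis theorem; the generic sketch below (not JPL's code, which notably avoids even the square root used here in a degeneracy check) brute-forces the fifteen candidate axes by projecting both boxes' corners.

        # Two convex boxes are disjoint iff some axis among the 6 face normals
        # and 9 edge cross products separates their projected intervals.
        import itertools
        import numpy as np

        def corners(center, axes, half_extents):
            """8 corners of a box; columns of `axes` are its local unit axes."""
            pts = [center + axes @ (np.array(s) * half_extents)
                   for s in itertools.product((-1, 1), repeat=3)]
            return np.array(pts)

        def obb_overlap(a, b):
            """Each box is a (center, 3x3 axes matrix, half_extents) triple."""
            ca, aa, ea = a
            cb, ab, eb = b
            axes = [aa[:, i] for i in range(3)] + [ab[:, i] for i in range(3)]
            axes += [np.cross(u, v) for u in axes[:3] for v in axes[3:]]
            pa, pb = corners(ca, aa, ea), corners(cb, ab, eb)
            for axis in axes:
                if np.linalg.norm(axis) < 1e-9:   # parallel edges: skip degenerate axis
                    continue
                da, db = pa @ axis, pb @ axis
                if da.max() < db.min() or db.max() < da.min():
                    return False                  # separating axis found
            return True                           # no separating axis: overlap

        box1 = (np.zeros(3), np.eye(3), np.array([1.0, 1.0, 1.0]))
        box2 = (np.array([1.5, 0.0, 0.0]), np.eye(3), np.array([1.0, 1.0, 1.0]))
        print(obb_overlap(box1, box2))            # True: these boxes intersect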

  18. Friendship with a robot: Children's perception of similarity between a robot's physical and virtual embodiment that supports diabetes self-management.

    PubMed

    Sinoo, Claudia; van der Pal, Sylvia; Blanson Henkemans, Olivier A; Keizer, Anouk; Bierman, Bert P B; Looije, Rosemarijn; Neerincx, Mark A

    2018-07-01

    The PAL project develops a conversational agent with a physical (robot) and virtual (avatar) embodiment to support diabetes self-management of children ubiquitously. This paper assesses 1) the effect of perceived similarity between robot and avatar on children's friendship towards the avatar, and 2) the effect of this friendship on usability of a self-management application containing the avatar (a) and children's motivation to play with it (b). During a four-day diabetes camp in the Netherlands, 21 children participated in interactions with both agent embodiments. Questionnaires measured perceived similarity, friendship, motivation to play with the app and its usability. Children felt stronger friendship towards the physical robot than towards the avatar. The more children perceived the robot and its avatar as the same agency, the stronger their friendship with the avatar was. The stronger their friendship with the avatar, the more they were motivated to play with the app and the higher the app scored on usability. The combination of physical and virtual embodiments seems to provide a unique opportunity for building ubiquitous long-term child-agent friendships. An avatar complementing a physical robot in health care could increase children's motivation and adherence to use self-management support systems.

  19. Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments

    DTIC Science & Technology

    2014-11-12

    ith agent than to all other agents. Interested readers are referred to [55] for the comprehensive study on Voronoi partitioning and its applications...robots: An rfid approach, PhD dissertation, School of Electrical Engi- neering and Computer Science, University of Ottawa (October 2012). [55] A. Okabe, B...Gueaieb, A stochastic approach of mobile robot navigation using customized rfid sys- tems, International Conference on Signals, Circuits and Systems

  20. Designing minimal space telerobotics systems for maximum performance

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.; Steele, Robert D.

    1992-01-01

    The design of the remote site of a local-remote telerobot control system is described, which addresses the constraint of limited computational power available at the remote site while providing a large range of control capabilities. The Modular Telerobot Task Execution System (MOTES) provides supervised autonomous control, shared control, and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion. MOTES is kept minimal while providing a large capability by limiting its functionality to only that which is necessary at the remote site and by utilizing a unified multi-sensor-based impedance control scheme. A command interpreter similar to one used on robotic spacecraft is used to interpret commands received from the local site. The system is written in Ada, runs in a VME environment on 68020 processors, and initially controls a Robotics Research K1207 7-degree-of-freedom manipulator.

  1. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  2. Unmanned ground vehicles for integrated force protection

    NASA Astrophysics Data System (ADS)

    Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas

    2004-09-01

    The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operations Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters support autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection with details on a force-on-force scenario to test and demonstrate concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.

  3. Stability control for high speed tracked unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Pape, Olivier; Morillon, Joel G.; Houbloup, Philippe; Leveque, Stephane; Fialaire, Cecile; Gauthier, Thierry; Ropars, Patrice

    2005-05-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value". The paper details the "automatic speed adjustment" behavior (named SYR4), developed by Giat Industries Company, whose main goal is to secure the teleoperated mobility of high-speed tracked vehicles on rough grounds; more precisely, the validated low-level behavior continuously adjusts the vehicle speed taking into account the teleoperator's wish AND the maximum speed that the vehicle can manage safely according to the commanded radius of curvature. The algorithm is based on a realistic physical model of the ground-tracks relation, taking into account many vehicle and ground parameters (such as ground adherence and dynamic specificities of tracked vehicles). It also deals with the teleoperator-machine interface, providing a balanced strategy between both extreme behaviors: a) maximum speed reduction before initiating the commanded curve; b) executing the minimum possible radius without decreasing the commanded speed. The paper presents the results obtained from the military acceptance tests performed on the tracked SYRANO vehicle (French Operational Demonstrator).
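
    The core constraint can be illustrated with the friction-circle approximation v_max = sqrt(mu * g * R); this simplified sketch stands in for Giat's far more detailed ground-tracks model, and the adherence coefficient mu is an assumed value.

        # Limit the teleoperated speed so lateral acceleration on the commanded
        # radius of curvature stays within the assumed ground adherence.
        import math

        def safe_speed(v_commanded, radius_m, mu=0.6, g=9.81):
            if radius_m <= 0:                 # straight-line command, by convention
                return v_commanded
            v_max = math.sqrt(mu * g * radius_m)
            return min(v_commanded, v_max)

        print(safe_speed(v_commanded=12.0, radius_m=8.0))   # capped near 6.9 m/s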

  4. Human-Centered Design and Evaluation of Haptic Cueing for Teleoperation of Multiple Mobile Robots.

    PubMed

    Son, Hyoung Il; Franchi, Antonio; Chuang, Lewis L; Kim, Junsuk; Bulthoff, Heinrich H; Giordano, Paolo Robuffo

    2013-04-01

    In this paper, we investigate the effect of haptic cueing on a human operator's performance in the field of bilateral teleoperation of multiple mobile robots, particularly multiple unmanned aerial vehicles (UAVs). Two aspects of human performance are deemed important in this area, namely, the maneuverability of mobile robots and the perceptual sensitivity of the remote environment. We introduce metrics that allow us to address these aspects in two psychophysical studies, which are reported here. Three fundamental haptic cue types were evaluated. The Force cue conveys information on the proximity of the commanded trajectory to obstacles in the remote environment. The Velocity cue represents the mismatch between the commanded and actual velocities of the UAVs and can implicitly provide a rich amount of information regarding the actual behavior of the UAVs. Finally, the Velocity+Force cue is a linear combination of the two. Our experimental results show that, while maneuverability is best supported by the Force cue feedback, perceptual sensitivity is best served by the Velocity cue feedback. In addition, we show that large gains in the haptic feedbacks do not always guarantee an enhancement in the teleoperator's performance.
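
    The three cue types map naturally onto two small functions and their blend; the sketch below uses invented gains and a point-obstacle model, so it shows the structure of the cues rather than the study's exact formulation.

        import numpy as np

        def force_cue(pos, obstacle, influence=2.0, gain=3.0):
            """Repulsive cue that grows as the commanded position nears an obstacle."""
            offset = pos - obstacle
            d = np.linalg.norm(offset)
            if d >= influence or d == 0.0:
                return np.zeros(3)
            return gain * (influence - d) * offset / d

        def velocity_cue(v_commanded, v_actual, gain=1.5):
            """Cue proportional to the commanded-versus-actual velocity mismatch."""
            return gain * (v_commanded - v_actual)

        def combined_cue(pos, obstacle, v_cmd, v_act, alpha=0.5):
            """Velocity+Force cue: a linear combination of the two."""
            return (alpha * force_cue(pos, obstacle)
                    + (1.0 - alpha) * velocity_cue(v_cmd, v_act))

        print(combined_cue(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                           np.array([1.0, 0.0, 0.0]), np.array([0.6, 0.0, 0.0])))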

  5. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.
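
    On the PC side, such an arrangement typically reduces to writing short commands over a serial link; the sketch below is a guess at that pattern (the article does not give its wire protocol, so the command string, port name, and acknowledgement are hypothetical). It uses the pyserial package.

        import serial

        def send_servo_command(port, channel, angle_deg):
            """Send one hypothetical text command (e.g. 'S1 90') to the PIC,
            which drives the corresponding servo in its interrupt routine."""
            with serial.Serial(port, baudrate=9600, timeout=1) as link:
                link.write(f"S{channel} {int(angle_deg)}\n".encode("ascii"))
                return link.readline().decode("ascii").strip()  # e.g. an 'OK' ack

        # Example (requires the hardware to be attached):
        # send_servo_command("/dev/ttyUSB0", channel=1, angle_deg=90)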

  6. Ground Simulation of an Autonomous Satellite Rendezvous and Tracking System Using Dual Robotic Systems

    NASA Technical Reports Server (NTRS)

    Trube, Matthew J.; Hyslop, Andrew M.; Carignan, Craig R.; Easley, Joseph W.

    2012-01-01

    A hardware-in-the-loop ground system was developed for simulating a robotic servicer spacecraft tracking a target satellite at short range. A relative navigation sensor package "Argon" is mounted on the end-effector of a Fanuc 430 manipulator, which functions as the base platform of the robotic spacecraft servicer. Machine vision algorithms estimate the pose of the target spacecraft, mounted on a Rotopod R-2000 platform, and relay the solution to a simulation of the servicer spacecraft running in "Freespace", which performs guidance, navigation and control functions, integrates dynamics, and issues motion commands to a Fanuc platform controller so that it tracks the simulated servicer spacecraft. Results will be reviewed for several satellite motion scenarios at different ranges. Key words: robotics, satellite, servicing, guidance, navigation, tracking, control, docking.

  7. Center of excellence for small robots

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Carroll, Daniel M.; Laird, Robin T.; Everett, H. R.

    2005-05-01

    The mission of the Unmanned Systems Branch of SPAWAR Systems Center, San Diego (SSC San Diego) is to provide network-integrated robotic solutions for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications, serving and partnering with industry, academia, and other government agencies. We believe the most important criterion for a successful acquisition program is producing a value-added end product that the warfighter needs, uses and appreciates. Through our accomplishments in the laboratory and field, SSC San Diego has been designated the Center of Excellence for Small Robots by the Office of the Secretary of Defense Joint Robotics Program. This paper covers the background, experience, and collaboration efforts by SSC San Diego to serve as the "Impedance-Matching Transformer" between the robotic user and technical communities. Special attention is given to our Unmanned Systems Technology Imperatives for Research, Development, Testing and Evaluation (RDT&E) of Small Robots. Active projects, past efforts, and architectures are provided as success stories for the Unmanned Systems Development Approach.

  8. Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators

    PubMed Central

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2013-01-01

    Operators of a pair of robotic hands report ownership for those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations. PMID:23928891

  9. Toward a practical mobile robotic aid system for people with severe physical disabilities.

    PubMed

    Regalbuto, M A; Krouskop, T A; Cheatham, J B

    1992-01-01

    A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.

  10. Robotic surgery and hemostatic agents in partial nephrectomy: a high rate of success without vascular clamping.

    PubMed

    Morelli, Luca; Morelli, John; Palmeri, Matteo; D'Isidoro, Cristiano; Kauffmann, Emanuele Federico; Tartaglia, Dario; Caprili, Giovanni; Pisano, Roberta; Guadagni, Simone; Di Franco, Gregorio; Di Candio, Giulio; Mosca, Franco

    2015-09-01

    Robot-assisted partial nephrectomy has been proposed as a technique to overcome technical challenges of laparoscopic partial nephrectomy. We prospectively collected and analyzed data from 31 patients who underwent robotic partial nephrectomy with systematic use of hemostatic agents, between February 2009 and October 2014. Thirty-three renal tumors were treated in 31 patients. There were no conversions to open surgery, intraoperative complications, or blood transfusions. The mean size of the resected tumors was 27 mm (median 20 mm, range 5-40 mm). Twenty-seven of 33 lesions (82%) did not require vascular clamping and therefore were treated in the absence of ischemia. All margins were negative. The high partial nephrectomy success rate without vascular clamping suggests that robotic nephron-sparing surgery with systematic use of hemostatic agents may be a safe, effective method to completely avoid ischemia in the treatment of selected renal masses.

  11. FE Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013708 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  12. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013710 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  13. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013714 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  14. Mastracchio prepares Robonaut for Taskboard Operations

    NASA Image and Video Library

    2013-12-09

    ISS038-E-013712 (9 Dec. 2013) --- In the International Space Station's Destiny laboratory, NASA astronaut Rick Mastracchio, Expedition 38 flight engineer, prepares Robonaut 2 for an upcoming ground-commanded firmware update that will support the installation of a pair of legs for the humanoid robot. R2 was designed to test out the capability of a robot to perform tasks deemed too dangerous or mundane for astronauts. Robonaut's legs are scheduled to arrive to the station aboard the SpaceX-3 commercial cargo mission in February 2014.

  15. A task control architecture for autonomous robots

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Mitchell, Tom

    1990-01-01

    An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
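
    The layering can be caricatured in a few lines; the sketch below is an assumption-laden stand-in (stubbed behaviors, a queue for the communication layer), not the architecture's actual code.

        import queue

        messages = queue.Queue()    # stand-in for the distributed communication layer

        def behavior_layer(command):
            """Query sensors / execute a primitive command (stubbed)."""
            print("executing:", command)
            return "ok"

        def task_layer(goal):
            """Expand a goal into ordered subtasks and monitor for errors."""
            plan = {"survey_site": ["drive_to_site", "pan_camera", "send_images"]}
            for step in plan.get(goal, []):
                messages.put(step)
            while not messages.empty():
                if behavior_layer(messages.get()) != "ok":
                    print("recovering from error while pursuing", goal)
                    break           # error-recovery hook would go here

        task_layer("survey_site")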

  16. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031125 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  17. Robonaut 2 performs tests in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031124 (17 Jan. 2013) --- In the International Space Station's Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  18. Robonaut 2 in the U.S. Laboratory

    NASA Image and Video Library

    2013-01-02

    ISS034-E-013990 (2 Jan. 2013) --- In the International Space Station’s Destiny laboratory, Robonaut 2 is pictured during a round of testing for the first humanoid robot in space. Ground teams put Robonaut through its paces as they remotely commanded it to operate valves on a task board. Robonaut is a testbed for exploring new robotic capabilities in space, and its form and dexterity allow it to use the same tools and control panels as its human counterparts do aboard the station.

  19. The Rise of Robots: The Military’s Use of Autonomous Lethal Force

    DTIC Science & Technology

    2015-02-17

AIR WAR COLLEGE, AIR UNIVERSITY. The Rise of Robots: The Military's Use of Autonomous Lethal Force, by Christopher J. Spinelli, Lt Col. [Report documentation page fields omitted.] Christopher J. Spinelli is currently an Air War College student and was formerly Commander of the 445th Flight Test Squadron at Edwards Air Force Base.

  20. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

ISS040-E-079083 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson, Expedition 40 commander, enters data in a computer in preparation for a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  1. Executive system software design and expert system implementation

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1992-01-01

    The topics are presented in viewgraph form and include: software requirements; design layout of the automated assembly system; menu display for automated composite command; expert system features; complete robot arm state diagram and logic; and expert system benefits.

  2. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. Here we propose a human-face focus-of-attention technique and a facial-expression classifier (a Dynamic Bayesian Network) for incorporation in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry of human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally: the platform performs an arc centered on the human, while the robotic head moves in synchrony when necessary. In the proposed probabilistic classifier, information is propagated from the previous instant to the current one in a lower level of the network. Moreover, facial expressions are recognized using negative as well as positive evidence.

  3. Construction of multi-agent mobile robots control system in the problem of persecution with using a modified reinforcement learning method based on neural networks

    NASA Astrophysics Data System (ADS)

    Patkin, M. L.; Rogachev, G. N.

    2018-02-01

A method for constructing a multi-agent control system for mobile robots, based on reinforcement learning with deep neural networks, is considered. The control system is synthesized with a modified Actor-Critic method in which the Actor module is divided into an Action Actor and a Communication Actor, so that each agent simultaneously controls its robot and communicates with its partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The Actor and Critic functions are approximated by deep neural networks: the Critic's value function is trained using the TD-error method and the Actor's function using DDPG, while the Communication Actor's network is trained through gradients received from partner agents. An environment with cooperative multi-agent interaction was developed, and the method was evaluated in computer simulation on a control problem in which two robots pursue two targets.
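
    A minimal sketch of the split-actor idea (plain numpy with untrained random networks; names such as `action_actor` and `comm_actor` and all dimensions are illustrative assumptions, and the DDPG training loop is omitted):

    ```python
    import numpy as np

    def mlp(sizes, rng):
        """Random-weight multi-layer perceptron, a stand-in for a trained network."""
        return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(layers, x):
        for W in layers[:-1]:
            x = np.tanh(x @ W)
        return x @ layers[-1]          # linear output layer

    class Agent:
        def __init__(self, obs_dim, msg_dim, act_dim, n_partners, rng):
            in_dim = obs_dim + n_partners * msg_dim   # observation plus received messages
            self.action_actor = mlp([in_dim, 32, act_dim], rng)  # drives the robot
            self.comm_actor = mlp([in_dim, 32, msg_dim], rng)    # emits a real-valued message

        def step(self, obs, inbox):
            x = np.concatenate([obs] + inbox)
            action = np.tanh(forward(self.action_actor, x))  # bounded control command
            message = forward(self.comm_actor, x)            # broadcast to partners next step
            return action, message

    rng = np.random.default_rng(0)
    agents = [Agent(obs_dim=4, msg_dim=2, act_dim=2, n_partners=1, rng) for _ in range(2)]
    msgs = [np.zeros(2), np.zeros(2)]                 # empty messages at the first step
    obs = [rng.standard_normal(4) for _ in range(2)]
    for t in range(3):
        out = [a.step(o, [msgs[1 - i]]) for i, (a, o) in enumerate(zip(agents, obs))]
        actions, msgs = [a for a, _ in out], [m for _, m in out]
    ```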

  4. Cooperative Robot Localization Using Event-Triggered Estimation

    NASA Astrophysics Data System (ADS)

    Iglesias Echevarria, David I.

It is known that multiple-robot systems that must cooperate to perform certain tasks incur high energy costs, which hinder their autonomous functioning and limit the benefits these platforms provide to humans. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in wireless sensor networks but rarely for robot localization, agents send measurements to their neighbors only when the expected novelty of the information is high. Since all agents know the condition that triggers the sending of a measurement, the absence of a measurement is itself informative and is fused into the state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, the algorithm is compared to similar cooperative localization approaches, and extensive simulations illustrate the effectiveness of the method.
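
    As a concrete illustration of the fusion rule mentioned above, here is a generic covariance intersection step (not the paper's full event-triggered filter; the omega search and the numbers are illustrative):

    ```python
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, omega):
        """Fuse two estimates with unknown cross-correlation.
        Yields a consistent (non-overconfident) fused covariance for any omega in [0, 1]."""
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(omega * P1i + (1 - omega) * P2i)
        x = P @ (omega * P1i @ x1 + (1 - omega) * P2i @ x2)
        return x, P

    # Pick omega minimizing the trace of the fused covariance (simple line search).
    x1, P1 = np.array([1.0, 0.0]), np.diag([0.5, 2.0])
    x2, P2 = np.array([1.2, -0.1]), np.diag([2.0, 0.4])
    omegas = np.linspace(0.01, 0.99, 99)
    best = min(omegas, key=lambda w: np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]))
    x_fused, P_fused = covariance_intersection(x1, P1, x2, P2, best)
    ```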

  5. Multi-Agent Diagnosis and Control of an Air Revitalization System for Life Support in Space

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Kowing, Jeffrey; Nieten, Joseph; Graham, Jeffrey s.; Schreckenghost, Debra; Bonasso, Pete; Fleming, Land D.; MacMahon, Matt; Thronesbery, Carroll

    2000-01-01

    An architecture of interoperating agents has been developed to provide control and fault management for advanced life support systems in space. In this adjustable autonomy architecture, software agents coordinate with human agents and provide support in novel fault management situations. This architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and diagnosis. MIR identifies novel recovery configurations and the set of commands required for the recovery. The AZT procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. User interface extensions have been developed to support human monitoring of both AZT and MIR data and activities. This architecture has been demonstrated performing control and fault management for an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed.

  6. Simultaneous Deployment and Tracking Multi-Robot Strategies with Connectivity Maintenance

    PubMed Central

    Tardós, Javier; Aragues, Rosario; Sagüés, Carlos; Rubio, Carlos

    2018-01-01

Multi-robot teams composed of ground and aerial vehicles have gained attention during the last few years. We present a scenario in which both types of robots must monitor the same area from different viewpoints. In this paper, we propose two Lloyd-based tracking strategies that allow the ground robots (agents) to follow the aerial ones (targets) while keeping connectivity between the agents. The first strategy establishes density functions on the environment so that the targets acquire more importance than other zones, while the second iteratively modifies the virtual limits of the working area depending on the positions of the targets. Because coverage tasks tend to spread the agents as much as possible, connectivity maintenance is addressed by restricting their motions so that they keep the links of a minimum spanning tree of the communication graph. We provide a thorough parametric study of the performance of the proposed strategies under several simulated scenarios. In addition, the methods are implemented and tested using realistic robotic simulation environments and real experiments. PMID:29558446
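
    A minimal sketch of the Lloyd-style update behind the first strategy (a sampled weighted-centroid step with a Gaussian density centered on each target; the parameters and the sampling approach are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def lloyd_step(agents, targets, bounds, n_samples=20000, sigma=0.15, rng=None):
        """Move each agent toward the density-weighted centroid of its Voronoi cell."""
        rng = rng or np.random.default_rng(0)
        pts = rng.uniform(bounds[0], bounds[1], size=(n_samples, 2))
        # Density: higher importance near the (aerial) targets.
        d2 = ((pts[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2 * sigma**2)).sum(axis=1) + 1e-6
        # Assign each sample to its nearest agent (Voronoi partition).
        owner = ((pts[:, None, :] - agents[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        new_agents = agents.copy()
        for i in range(len(agents)):
            m = owner == i
            if m.any():
                w = phi[m]
                new_agents[i] = (pts[m] * w[:, None]).sum(0) / w.sum()
        return new_agents

    agents = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
    targets = np.array([[0.6, 0.6], [0.3, 0.7]])
    for _ in range(10):
        agents = lloyd_step(agents, targets, bounds=(0.0, 1.0))
    ```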

  7. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

Understanding others' actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical study of action perception has been facilitated by well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right-arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of either mechanical agent. There was also a relationship between the dynamics of the muscle activity and the motion dynamics of the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance but is also found for robots; however, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  8. Investigations Into Internal and External Aspects of Dynamic Agent-Environment Couplings

    NASA Astrophysics Data System (ADS)

    Dautenhahn, Kerstin

    This paper originates from my work on `social agents'. An issue which I consider important to this kind of research is the dynamic coupling of an agent with its social and non-social environment. I hypothesize `internal dynamics' inside an agent as a basic step towards understanding. The paper therefore focuses on the internal and external dynamics which couple an agent to its environment. The issue of embodiment in animals and artifacts and its relation to `social dynamics' is discussed first. I argue that embodiment is linked to a concept of a body and is not necessarily given when running a control program on robot hardware. I stress the individual characteristics of an embodied cognitive system, as well as its social embeddedness. I outline the framework of a physical-psychological state space which changes dynamically in a self-modifying way as a holistic approach towards embodied human and artificial cognition. This framework is meant to discuss internal and external dynamics of an embodied, natural or artificial agent. In order to stress the importance of a dynamic memory I introduce the concept of an `autobiographical agent'. The second part of the paper gives an example of the implementation of a physical agent, a robot, which is dynamically coupled to its environment by balancing on a seesaw. For the control of the robot a behavior-oriented approach using the dynamical systems metaphor is used. The problem is studied through building a complete and co-adapted robot-environment system. A seesaw which varies its orientation with one or two degrees of freedom is used as the artificial `habitat'. The problem of stabilizing the body axis by active motion on a seesaw is solved by using two inclination sensors and a parallel, behavior-oriented control architecture. Some experiments are described which demonstrate the exploitation of the dynamics of the robot-environment system.

  9. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

The article considers the problem of automating the formation of large, complex parts, products, and structures, especially for unique or small-batch objects produced by additive technology [1]. Research into the optimal design of a robotic complex, its modes of operation, and the structure of its control system helped establish the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Research on virtual models of the robotic complex made it possible to define the main directions of design improvement and the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.

  10. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    PubMed

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed by a smartphone, that can navigate around a typical home or care environment, avoiding obstacles and positioning itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance, and object-based sonar navigation.

  11. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

Nowadays there are many developments in building intelligent humanoid robots, mainly to handle voice and image. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on the Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show that the accuracy of our blind speech separation system is about 88% for command and query recognition.
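
    A minimal sketch of the separation step using scikit-learn's FastICA on synthetic mixtures (a generic two-source demo, not the authors' robot pipeline):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Two synthetic sources: a tone and a sawtooth-like signal.
    t = np.linspace(0, 1, 8000)
    s1 = np.sin(2 * np.pi * 440 * t)
    s2 = 2 * (t * 7 % 1) - 1
    S = np.c_[s1, s2]

    # Mix the sources as two "microphone" channels.
    A = np.array([[1.0, 0.6], [0.4, 1.0]])
    X = S @ A.T

    # Recover the independent components (order and scale are arbitrary).
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)        # shape (n_samples, 2): the separated signals
    ```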

  12. iss055e010992

    NASA Image and Video Library

    2018-04-04

    iss055e010992 (April 5, 2018) --- The SpaceX Dragon resupply ship is pictured just moments after Japan Aerospace Exploration Agency astronaut Norishige Kanai commanded the 57.7-foot-long Canadarm2 robotic arm to reach out and capture the commercial space freighter.

  13. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
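
    To make the idea concrete, here is a toy sketch of one kind of constraint check such a program performs (a hypothetical Command structure and power constraint; the real system models full flight-system state and many constraint types):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Command:
        time: float        # seconds from sequence start
        name: str
        power_draw: float  # watts while active
        duration: float

    def check_power_constraint(sequence, max_power):
        """Flag any instant where concurrently active commands exceed the power budget."""
        violations = []
        events = sorted({c.time for c in sequence} | {c.time + c.duration for c in sequence})
        for t in events:
            load = sum(c.power_draw for c in sequence if c.time <= t < c.time + c.duration)
            if load > max_power:
                violations.append((t, load))
        return violations

    seq = [Command(0.0, "HEATER_ON", 40.0, 120.0),
           Command(60.0, "CAMERA_ON", 35.0, 30.0)]
    print(check_power_constraint(seq, max_power=70.0))   # -> [(60.0, 75.0)]
    ```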

  14. Designing and implementing transparency for real time inspection of autonomous robots

    NASA Astrophysics Data System (ADS)

    Theodorou, Andreas; Wortham, Robert H.; Bryson, Joanna J.

    2017-07-01

The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering and expanding upon other prominent definitions found in the literature, we provide a robust definition of transparency as a mechanism to expose the decision-making of a robot. The paper continues by addressing potential design decisions developers need to consider when designing and developing transparent systems. Finally, we describe our new interactive intelligence editor, designed to visualise, develop and debug real-time intelligence.

  15. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots

    PubMed Central

    Li, Xin; Bilbao, Sonia; Martín-Wanton, Tamara; Bastos, Joaquim; Rodriguez, Jonathan

    2017-01-01

    In order to facilitate cooperation between underwater robots, it is a must for robots to exchange information with unambiguous meaning. However, heterogeneity, existing in information pertaining to different robots, is a major obstruction. Therefore, this paper presents a networked ontology, named the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) ontology, to address information heterogeneity and enable robots to have the same understanding of exchanged information. The SWARMs ontology uses a core ontology to interrelate a set of domain-specific ontologies, including the mission and planning, the robotic vehicle, the communication and networking, and the environment recognition and sensing ontology. In addition, the SWARMs ontology utilizes ontology constructs defined in the PR-OWL ontology to annotate context uncertainty based on the Multi-Entity Bayesian Network (MEBN) theory. Thus, the SWARMs ontology can provide both a formal specification for information that is necessarily exchanged between robots and a command and control entity, and also support for uncertainty reasoning. A scenario on chemical pollution monitoring is described and used to showcase how the SWARMs ontology can be instantiated, be extended, represent context uncertainty, and support uncertainty reasoning. PMID:28287468

  16. Experiments in thrusterless robot locomotion control for space applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Jasper, Warren Joseph

    1990-01-01

    While performing complex assembly tasks or moving about in space, a space robot should minimize the amount of propellant consumed. A study is presented of space robot locomotion and orientation without the use of thrusters. The goal was to design a robot control paradigm that will perform thrusterless locomotion between two points on a structure, and to implement this paradigm on an experimental robot. A two arm free flying robot was constructed which floats on a cushion of air to simulate in 2-D the drag free, zero-g environment of space. The robot can impart momentum to itself by pushing off from an external structure in a coordinated two arm maneuver, and can then reorient itself by activating a momentum wheel. The controller design consists of two parts: a high level strategic controller and a low level dynamic controller. The control paradigm was verified experimentally by commanding the robot to push off from a structure with both arms, rotate 180 degs while translating freely, and then to catch itself on another structure. This method, based on the computed torque, provides a linear feedback law in momentum and its derivatives for a system of rigid bodies.
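
    As a back-of-the-envelope illustration of the momentum-wheel reorientation step (simple conservation of angular momentum for a free-floating rigid body; the inertia values are made up, and the actual controller uses a computed-torque feedback law):

    ```python
    # Free-floating robot: total angular momentum is conserved, so spinning the
    # wheel one way rotates the body the other way.
    I_body, I_wheel = 4.0, 0.05      # kg*m^2, illustrative values
    target_angle = 3.14159           # rotate the body about 180 degrees

    # body_rate * I_body + wheel_rate * I_wheel = 0  (system starts at rest),
    # so the wheel must sweep through -(I_body / I_wheel) times the body angle.
    wheel_angle = -(I_body / I_wheel) * target_angle
    print(f"wheel must turn {wheel_angle / 6.28318:.0f} revolutions")   # -> -40
    ```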

  17. Extraction of user's navigation commands from upper body force interaction in walker assisted gait.

    PubMed

    Frizera Neto, Anselmo; Gallego, Juan A; Rocon, Eduardo; Pons, José L; Ceres, Ramón

    2010-08-05

Advances in technology make possible the incorporation of sensors and actuators in rollators, building safer robots and extending the use of walkers to a more diverse population. This paper presents a new method for the extraction of navigation-related components from upper-body force interaction data in walker-assisted gait. A filtering architecture is designed to cancel: (i) the high-frequency noise caused by vibrations of the walker's structure due to irregularities in the terrain or the walker's wheels, and (ii) the cadence-related force components caused by the user's trunk oscillations during gait. As a result, a third component, related to the user's navigation commands, is distinguished. For the cancellation of high-frequency noise, a Benedict-Bordner g-h filter was designed, presenting very low values of kinematic tracking error ((2.035 ± 0.358) × 10^-2 kgf) and delay ((1.897 ± 0.3697) × 10^1 ms). A Fourier Linear Combiner filtering architecture was implemented for the adaptive attenuation of about 80% of the energy of the cadence-related components in the force data, without compromising the information contained in frequencies close to the notch filters. The presented methodology offers an effective cancellation of the undesired components of the force data, allowing the system to extract the user's voluntary navigation commands in real time. Based on this real-time identification of voluntary commands, a classical approach to the control architecture of the robotic walker is being developed, in order to obtain stable and safe user-assisted locomotion.
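
    A minimal sketch of a Benedict-Bordner g-h tracker (generic scalar form, using the Benedict-Bordner relation h = g^2 / (2 - g); the signal and gains are illustrative, not the paper's tuning):

    ```python
    def gh_filter(zs, x0, dx0, dt, g):
        """Track a noisy scalar signal with a g-h filter.
        Benedict-Bordner gains minimize transient error for a given g."""
        h = g * g / (2.0 - g)
        x, dx = x0, dx0
        out = []
        for z in zs:
            x_pred = x + dt * dx          # predict
            r = z - x_pred                # residual (innovation)
            x = x_pred + g * r            # correct the position estimate
            dx = dx + h * r / dt          # correct the rate estimate
            out.append(x)
        return out

    import random
    random.seed(1)
    truth = [0.1 * k for k in range(50)]                 # ramp signal
    zs = [v + random.gauss(0, 0.5) for v in truth]       # noisy measurements
    est = gh_filter(zs, x0=0.0, dx0=0.0, dt=1.0, g=0.3)
    ```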

  18. Final report for LDRD project 11-0783 : directed robots for increased military manpower effectiveness.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrer, Brandon Robinson; Rothganger, Fredrick H.; Wagner, John S.

The purpose of this LDRD is to develop technology allowing warfighters to provide high-level commands to their unmanned assets, freeing them to command a group of them or commit the bulk of their attention elsewhere. To this end, a brain-emulating cognition and control architecture (BECCA) was developed, incorporating novel and uniquely capable feature creation and reinforcement learning algorithms. BECCA was demonstrated on both a mobile manipulator platform and on a seven-degree-of-freedom serial-link robot arm. Existing military ground robots are almost universally teleoperated and occupy the complete attention of an operator. They may remove a soldier from harm's way, but they do not necessarily reduce manpower requirements. Current research efforts to solve the problem of autonomous operation in an unstructured, dynamic environment fall short of the desired performance. In order to increase the effectiveness of unmanned vehicle (UV) operators, we proposed to develop robots that can be 'directed' rather than remote-controlled. They are instructed and trained by human operators, rather than driven. The technical approach is modeled closely on psychological and neuroscientific models of human learning. Two Sandia-developed models are utilized in this effort: the Sandia Cognitive Framework (SCF), a cognitive psychology-based model of human processes, and BECCA, a psychophysical-based model of learning, motor control, and conceptualization. Together, these models span the functional space from perceptuo-motor abilities to high-level motivational and attentional processes.

  19. ARC-2006-ACD06-0113-012

    NASA Image and Video Library

    2006-06-28

Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  20. ARC-2006-ACD06-0113-015

    NASA Image and Video Library

    2006-06-28

Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  1. ARC-2006-ACD06-0113-014

    NASA Image and Video Library

    2006-07-05

Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  2. ARC-2006-ACD06-0113-013

    NASA Image and Video Library

    2006-06-28

Spaceward Bound Program in the Atacama Desert; shown here is a real-time webcast from Yungay, Chile, via satellite, involving NASA scientists and seven NASA Explorer school teachers. On the Ames end is the Girl Scouts 'Space Cookies' robotics team. The robot, nicknamed Zoe, is looking for life in extreme environments in preparation for what might be encountered on Mars. See full text in NASA-Ames News - Research # 04-91AR. Center Director works with 'SpaceCookie' sending commands to Zoe.

  3. This "Ethical Trap" Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.

    PubMed

    Miller, Keith W; Wolf, Marty J; Grodzinsky, Frances

    2017-04-01

    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.

  4. From the laboratory to the soldier: providing tactical behaviors for Army robots

    NASA Astrophysics Data System (ADS)

    Knichel, David G.; Bruemmer, David J.

    2008-04-01

The Army Future Combat System (FCS) Operational Requirement Document has identified a number of advanced robot tactical behavior requirements to enable the Future Brigade Combat Team (FBCT). The FBCT advanced tactical behaviors include Sentinel Behavior, Obstacle Avoidance Behavior, and Scaled Levels of Human-Machine Control Behavior. The U.S. Army Training and Doctrine Command (TRADOC) Maneuver Support Center (MANSCEN) has also documented a number of robotic behavior requirements for non-FCS Army forces such as the Infantry Brigade Combat Team (IBCT), Stryker Brigade Combat Team (SBCT), and Heavy Brigade Combat Team (HBCT). The general categories of useful robot tactical behaviors include ground/air mobility behaviors, tactical mission behaviors, manned-unmanned teaming behaviors, and soldier-robot interface behaviors. Many research and development centers are producing the components necessary for artificial tactical behaviors for ground and air robots, including the Army Research Laboratory (ARL), the U.S. Army Research, Development and Engineering Command (RDECOM), the Space and Naval Warfare (SPAWAR) Systems Center, the US Army Tank-Automotive Research, Development and Engineering Center (TARDEC), and non-DoD labs such as those of the Department of Energy (DOE). With the support of the Joint Ground Robotics Enterprise (JGRE), working through DoD and non-DoD labs, the Army Maneuver Support Center has recently concluded successful field trials of ground and air robots with specialized tactical behaviors and sensors that enable semi-autonomous detection, reporting, and marking of explosive hazards, including Improvised Explosive Devices (IEDs) and landmines. A specific goal of this effort was to assess how collaborative behaviors for multiple unmanned air and ground vehicles can reduce risks to soldiers and increase efficiency for on- and off-route explosive hazard detection, reporting, and marking. This paper discusses experimental results achieved with a robotic countermine system that utilizes autonomous behaviors and a mixed-initiative control scheme to address the challenges of detecting and marking buried landmines. Emerging requirements for robotic countermine operations are outlined, as are the technologies developed under this effort to address them. A first experiment shows that the resulting system was able to find and mark landmines with a very low level of human involvement. In addition, the data indicate that the robotic system is able to decrease the time to find mines and increase detection accuracy and reliability. Finally, the paper presents current efforts to incorporate new countermine sensors and port the resulting behaviors to two fielded military systems for rigorous assessment.

  5. Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)

    DTIC Science & Technology

    2000-01-01

[Fragmentary record; only excerpts survive.] The excerpt describes a real-time operating system (a message-passing OS well suited for distributed flight and ground processors), the Space Command Language (SCL), a relational database management system (RDMS), and the TS-21 system. An engineer with Princeton Satellite Systems is working with others to develop ObjectAgent software to run on the OSE Real Time Operating System.

  6. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2012-01-01

The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and to examine the concept of directed emergence among a group of agents of limited artificial intelligence. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of its home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Leveraging this emergence concept in a simulation of cooperative agents (robots) allows an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed for evaluating population-based robotic exploration and cooperative strategies, leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents act locally with non-local results. The rules by which the simulated robots interact will be kept as simple as possible for the desired result, leveraging emergence. Sensor malfunction and line-of-sight issues are incorporated into the simulation. This approach falls under swarm robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm robotics often draws on biologically inspired approaches; research comes from social-insect observation but also from data on herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration is the method under consideration for further study.
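
    A toy illustration of emergence from a single local rule (an entirely hypothetical grid world and rule, not the paper's simulation): each agent digs only where digging already exists nearby, yet a coherent excavation grows with no global plan:

    ```python
    import random

    random.seed(0)
    W, H, STEPS = 30, 10, 2000
    dug = {(x, 0) for x in range(12, 18)}          # small seed excavation
    agents = [(random.randrange(W), 0) for _ in range(8)]

    def neighbors(x, y):
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx or dy) and 0 <= x + dx < W and 0 <= y + dy < H]

    for _ in range(STEPS):
        moved = []
        for x, y in agents:
            if any(n in dug for n in neighbors(x, y)):
                dug.add((x, y))                     # local rule: extend existing digging
            x, y = random.choice(neighbors(x, y))   # random walk
            moved.append((x, y))
        agents = moved

    print(f"{len(dug)} cells excavated")   # an emergent structure grows from the seed
    ```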

  7. Incorporation of perception-based information in robot learning using fuzzy reinforcement learning agents

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo

    2002-04-01

Robot learning in unstructured environments has been proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can utilize perceptions to guide their learning toward those parts of the perception-action space that are actually relevant to the task. We therefore conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. For this reason, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network, and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local-minima problem in traditional actor-critic reinforcement learning; conversely, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection, sea rescue, and military maritime activity.

  8. Robotics technology discipline

    NASA Technical Reports Server (NTRS)

    Montemerlo, Melvin D.

    1990-01-01

    Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.

  9. STS-106 Crew Activities Report/Flight Day 04 Highlights

    NASA Technical Reports Server (NTRS)

    2000-01-01

On this fourth day of the STS-106 Atlantis mission, the flight crew, Commander Terrence W. Wilcutt, Pilot Scott D. Altman, and Mission Specialists Daniel C. Burbank, Edward T. Lu, Richard A. Mastracchio, Yuri Ivanovich Malenchenko, and Boris V. Morukov, are seen preparing for the scheduled space walk. Lu and Malenchenko are seen coming through the hatch of the International Space Station (ISS). Also shown are Lu and Malenchenko attaching a magnetometer and boom to Zvezda. Mastracchio operates the robot arm, moving the extravehicular activity (EVA) crew outside of the ISS.

  10. Sandia National Laboratories proof-of-concept robotic security vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, J.J.; Jones, D.P.; Klarer, P.R.

    1989-01-01

Several years ago Sandia National Laboratories developed a prototype interior robot that could navigate autonomously inside a large complex building to aid in testing interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities. 2 refs., 3 figs.

  11. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

Programming of robots in today's world is often done in a procedure-oriented fashion, without object-oriented programming. In order to keep a robust architecture that allows easy expansion of capabilities and a truly modular design, object-oriented programming is required. However, object-oriented concepts are not typically applied to real-time environments. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework that abstracts control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software that operates multiple independently developed control systems simultaneously and the safety measures that keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  12. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview is presented of the system hardware and software configurations, and implementation is discussed of subsystem functions.

  13. Augmented reality and haptic interfaces for robot-assisted surgery.

    PubMed

    Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N

    2012-03-01

    Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
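
    A minimal sketch of a forbidden-region virtual fixture of the kind mentioned (generic geometric clamping of a commanded tool position against a spherical no-go region; the names and dimensions are illustrative, not the authors' implementation):

    ```python
    import numpy as np

    def apply_forbidden_region(p_cmd, center, radius):
        """Project a commanded tool position out of a spherical forbidden region.
        If the command falls inside the sphere, it is pushed to the surface."""
        v = p_cmd - center
        d = np.linalg.norm(v)
        if d >= radius:
            return p_cmd                      # command is safe; pass it through
        if d == 0.0:
            v, d = np.array([0.0, 0.0, 1.0]), 1.0   # arbitrary escape direction
        return center + v / d * radius        # clamp to the sphere surface

    center, radius = np.array([0.0, 0.0, 0.0]), 0.02   # a 2 cm no-go zone
    print(apply_forbidden_region(np.array([0.005, 0.0, 0.0]), center, radius))
    # -> [0.02 0.   0.  ]: the manipulator is held at the boundary
    ```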

  14. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction

    PubMed Central

    Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.

    2017-01-01

    Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407

  15. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

ISS040-E-079355 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  16. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

ISS040-E-079129 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  17. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

ISS040-E-079910 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  18. SPHERES Vertigo

    NASA Image and Video Library

    2014-07-25

ISS040-E-079332 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.

  19. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  20. Plan recognition and generalization in command languages with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Yared, Wael I.; Sheridan, Thomas B.

    1991-01-01

    A method for pragmatic inference as a necessary accompaniment to command languages is proposed. The approach taken focuses on the modeling and recognition of the human operator's intent, which relates sequences of domain actions ('plans') to changes in some model of the task environment. The salient feature of this module is that it captures some of the physical and linguistic contextual aspects of an instruction. This provides a basis for generalization and reinterpretation of the instruction in different task environments. The theoretical development is founded on previous work in computational linguistics and some recent models in the theory of action and intention. To illustrate these ideas, an experimental command language to a telerobot is implemented. The program consists of three different components: a robot graphic simulation, the command language itself, and the domain-independent pragmatic inference module. Examples of task instruction processes are provided to demonstrate the benefits of this approach.

  1. The Dominant Robot: Threatening Robots Cause Psychological Reactance, Especially When They Have Incongruent Goals

    NASA Astrophysics Data System (ADS)

    Roubroeks, M. A. J.; Ham, J. R. C.; Midden, C. J. H.

    Persuasive technology can take the form of a social agent that persuades people to change behavior or attitudes. However, like any persuasive technology, persuasive social agents might trigger psychological reactance, which can lead to restoration behavior. The current study investigated whether interacting with a persuasive robot can cause psychological reactance. Additionally, we investigated whether goal congruency plays a role in psychological reactance. Participants programmed a washing machine while a robot gave threatening advice. Confirming expectations, participants experienced more psychological reactance when receiving high-threatening advice compared to low-threatening advice. Moreover, when the robot gave high-threatening advice and expressed an incongruent goal, participants reported the highest level of psychological reactance (on an anger measure). Finally, high-threatening advice led to more restoration, and this relationship was partially mediated by psychological reactance. Overall, results imply that under certain circumstances persuasive technology can trigger opposite effects, especially when people have incongruent goal intentions.

  2. Impacts of Advanced Manufacturing Technology on Parametric Estimating

    DTIC Science & Technology

    1989-12-01

[Fragmentary record; only excerpts survive.] "...been built (Blois, p. 65). As firms move up the levels of automation, there is a large capital investment to acquire robots, computer numerically [controlled machines]..." Cited works include the Affordable Acquisition Approach Study, Executive Summary, Air Force Systems Command, Andrews AFB, Maryland, February 9, 1983, and Blois, K.J., "Manufacturing...".

  3. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    PubMed Central

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration and kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller through a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot was evaluated inside a 3-Tesla MRI scanner using standard imaging sequences: the signal-to-noise ratio (SNR) loss is limited to 15%, and image deterioration due to the presence and motion of the robot is unobservable. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance, based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962
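
    For reference, the reported accuracy metric is simple to reproduce; a sketch (plain numpy, with made-up coordinates) of RMS 3D Euclidean error over a set of targeted placements:

    ```python
    import numpy as np

    def rms_3d_error(targets, actuals):
        """Root mean square of the 3D Euclidean distances between planned
        target points and segmented needle-tip positions (mm)."""
        d = np.linalg.norm(np.asarray(targets) - np.asarray(actuals), axis=1)
        return np.sqrt(np.mean(d ** 2))

    targets = [[10.0, 5.0, 30.0], [12.0, 7.5, 28.0]]   # planned (mm), illustrative
    actuals = [[10.6, 5.2, 30.4], [11.5, 7.9, 27.6]]   # measured from MRI volumes
    print(f"RMS error: {rms_3d_error(targets, actuals):.2f} mm")
    ```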

  4. Process Algebra Approach for Action Recognition in the Maritime Domain

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    2011-01-01

    The maritime environment poses a number of challenges for autonomous operation of surface boats. Among these challenges are the highly dynamic nature of the environment, the onboard sensing and reasoning requirements for obeying the navigational rules of the road, and the need for robust day/night hazard detection and avoidance. Development of full mission level autonomy entails addressing these challenges, coupled with inference of the tactical and strategic intent of possibly adversarial vehicles in the surrounding environment. This paper introduces PACIFIC (Process Algebra Capture of Intent From Information Content), an onboard system based on formal process algebras that is capable of extracting actions/activities from sensory inputs and reasoning within a mission context to ensure proper responses. PACIFIC is part of the Behavior Engine in CARACaS (Cognitive Architecture for Robotic Agent Command and Sensing), a system that is currently running on a number of U.S. Navy unmanned surface and underwater vehicles. Results from a series of experimental studies that demonstrate the effectiveness of the system are also presented.

5. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot based on a platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the platform. A state-space model of the robot was derived using standard black-box system identification; the input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an estimation of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
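
    A minimal sketch of the geometric core of such an estimator (a generic least-squares fit of planar rigid motion v(p) = (vx - omega*y, vy + omega*x) to a ground-projected flow field; this is an illustration under that assumption, not the paper's Horn-Schunck pipeline):

    ```python
    import numpy as np

    def estimate_icr(points, flow):
        """Fit planar rigid motion (vx, vy, omega) to ground-plane flow vectors,
        then return the instantaneous center of rotation (ICR)."""
        n = len(points)
        A = np.zeros((2 * n, 3))
        b = flow.reshape(-1)
        A[0::2, 0] = 1.0             # vx contribution to the u component
        A[0::2, 2] = -points[:, 1]   # -omega * y
        A[1::2, 1] = 1.0             # vy contribution to the v component
        A[1::2, 2] = points[:, 0]    # +omega * x
        (vx, vy, w), *_ = np.linalg.lstsq(A, b, rcond=None)
        icr = np.array([-vy / w, vx / w])   # the point where the velocity field vanishes
        return icr, (vx, vy, w)

    # Synthetic check: pure rotation about (1, 2) at 0.5 rad/s.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-3, 3, size=(200, 2))
    w_true, icr_true = 0.5, np.array([1.0, 2.0])
    flow = np.c_[-w_true * (pts[:, 1] - icr_true[1]), w_true * (pts[:, 0] - icr_true[0])]
    print(estimate_icr(pts, flow)[0])   # -> approximately [1. 2.]
    ```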

  6. A fault-tolerant intelligent robotic control system

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam Sing

    1993-01-01

    This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system-level hardware/software fault tolerance with task-level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system-level fault tolerance is the distributed recovery block, which protects against application software, system software, hardware, and network failures. Task-level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two-level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault-tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
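    The sketch below illustrates the recovery-block principle behind the system-level design in its basic, single-node form: a primary routine runs first, an acceptance test checks its result, and alternates are tried on failure (the distributed variant runs the alternates on separate nodes). The routine names in the usage comment are hypothetical.

    ```python
    def recovery_block(primary, alternates, acceptance_test, *args):
        """Run the primary routine; if its result fails the acceptance
        test (or the routine crashes), fall back through the alternates
        in order. This is the classic recovery-block pattern."""
        for routine in [primary, *alternates]:
            try:
                result = routine(*args)
                if acceptance_test(result):
                    return result
            except Exception:
                continue  # a crash is treated like a failed acceptance test
        raise RuntimeError("all variants failed the acceptance test")

    # Hypothetical use: a precise IK solver backed by a coarser one.
    # joints = recovery_block(analytic_ik, [numeric_ik], within_limits, pose)
    ```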

  7. Development of coffee maker service robot using speech and face recognition systems using POMDP

    NASA Astrophysics Data System (ADS)

    Budiharto, Widodo; Meiliana; Santoso Gunawan, Alexander Agung

    2016-07-01

    There have been many developments of intelligent service robots intended to interact with users naturally. This can be achieved by embedding speech and face recognition abilities for specific tasks in the robot. In this research, we propose an intelligent coffee maker robot whose speech recognition is based on the Indonesian language and powered by statistical dialogue systems. This kind of robot can be used in an office, supermarket or restaurant. In our scenario, the robot recognizes the user's face and then accepts commands from the user to perform an action, specifically making a coffee. Based on our previous work, the accuracy of speech recognition is about 86% and of face recognition about 93% in laboratory experiments. The main problem here is determining the user's intention regarding the sweetness of the coffee. The intelligent coffee maker robot should infer the user's intention through conversation under unreliable automatic speech recognition in a noisy environment. In this paper, this spoken dialog problem is treated as a partially observable Markov decision process (POMDP). We describe how this formulation establishes a promising framework, supported by empirical results. Dialog simulations are presented which demonstrate significant quantitative outcomes.
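    A minimal sketch of the belief update at the heart of such a spoken-dialog POMDP follows. The hidden state is the user's intended sweetness level and the observations are noisy speech-recognition outcomes; the confusion matrix standing in for the ASR error model is an assumed illustration, not the paper's measured values.

    ```python
    import numpy as np

    STATES = ["no_sugar", "one_sugar", "two_sugar"]   # hidden user intent

    # T would condition on the robot's action; a clarifying question
    # leaves the user's intent unchanged, so T is the identity here.
    T = np.eye(3)

    # O[s, o]: probability of hearing observation o given true intent s
    # (an assumed ASR confusion matrix, for illustration only).
    O = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])

    def belief_update(belief, obs):
        """POMDP belief update: b'(s') ~ O(o|s') * sum_s T(s'|s) b(s)."""
        predicted = T.T @ belief
        updated = O[:, obs] * predicted
        return updated / updated.sum()

    b = np.full(3, 1 / 3)                 # uniform prior over intents
    for obs in [1, 1, 0]:                 # two "one sugar" hearings, one "no sugar"
        b = belief_update(b, obs)
    print(dict(zip(STATES, b.round(3))))  # belief concentrates on "one_sugar"
    ```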

  8. Robotic follower experimentation results: ready for FCS increment I

    NASA Astrophysics Data System (ADS)

    Jaczkowski, Jeffrey J.

    2003-09-01

    Robotics is a fundamental enabling technology required to meet the U.S. Army's vision to be a strategically responsive force capable of domination across the entire spectrum of conflict. The U.S. Army Research, Development and Engineering Command (RDECOM) Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory, is developing a leader-follower capability for Future Combat Systems. The Robotic Follower Advanced Technology Demonstration (ATD) utilizes a manned leader vehicle to provide high-level proofing of the path for the follower, which operates with minimal user intervention. This paper gives a programmatic overview and discusses both the technical approach and the operational experimentation results obtained during testing conducted at Ft. Bliss, New Mexico in February-March 2003.

  9. Teleoperated position control of a PUMA robot

    NASA Technical Reports Server (NTRS)

    Austin, Edmund; Fong, Chung P.

    1987-01-01

    A laboratory distributed computer control teleoperator system is developed to support NASA's future space telerobotic operations. This teleoperator system uses a universal force-reflecting hand controller at the local site as the operator's input device. At the remote site, a PUMA controller receives the Cartesian position commands and implements PID control laws to position the PUMA robot. The local site uses two microprocessors while the remote site uses three. The processors communicate with each other through shared memory. The PUMA robot controller was interfaced through custom-made electronics to bypass VAL. The development status of this teleoperator system is reported. The execution time of each processor is analyzed, and the overall system throughput rate is reported. Methods to improve efficiency and performance are discussed.

  10. Emergency response nurse scheduling with medical support robot by multi-agent and fuzzy technique.

    PubMed

    Kono, Shinya; Kitamura, Akira

    2015-08-01

    In this paper, a new co-operative re-scheduling method is described for medical support tasks whose times of occurrence cannot be predicted, assuming that a robot can co-operate with the nurse on medical activities. Here, a Multi-Agent System (MAS) is used for the co-operative re-scheduling, in which a Fuzzy Contract Net (FCN) is applied to the robot's task assignment for the emergency tasks. Simulation results confirm that the re-scheduling produced by the proposed method can maintain patient satisfaction and decrease the workload of the nurse.

  11. IVA the robot: Design guidelines and lessons learned from the first space station laboratory manipulation system

    NASA Technical Reports Server (NTRS)

    Konkel, Carl R.; Powers, Allen K.; Dewitt, J. Russell

    1991-01-01

    The first interactive Space Station Freedom (SSF) lab robot exhibit was installed at the Space and Rocket Center in Huntsville, AL, and has been running daily since then. IntraVehicular Activity (IVA) the robot is mounted in a full-scale U.S. Lab (USL) mockup to educate the public on possible automation and robotics applications aboard the SSF. Responding to audio and video instructions at the Command Console, exhibit patrons may prompt IVA to perform a housekeeping task or give a speaking tour of the module. Other exemplary space station tasks are simulated, and the public can even challenge IVA to a game of tic-tac-toe. In anticipation of such a system being built for the Space Station, a discussion is provided of the approach taken, along with suggestions for applicability to the Space Station environment.

  12. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 Robotic Manipulator and the NASA Shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum-jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in the simulation. Satisfactory performance was achieved both in simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate the applicability of DMRAC to simulated dynamics of the Shuttle RMS.
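    As a toy illustration of the direct MRAC idea, the sketch below runs the standard first-order scalar MRAC with Lyapunov-based gain adaptation. The paper's CGT-based DMRAC for multi-joint manipulators is substantially more involved, and all plant and gain values here are arbitrary.

    ```python
    # Unknown first-order plant  y' = a*y + b*u  (a, b unknown to the controller)
    a, b = 1.0, 2.0
    # Reference model  ym' = am*ym + bm*r  specifying the desired response
    am, bm = -4.0, 4.0

    gamma, dt = 2.0, 1e-3
    y = ym = 0.0
    th_r = th_y = 0.0                 # adaptive feedforward/feedback gains

    for k in range(40000):            # 40 s of simulated time
        r = 1.0 if (k * dt) % 4 < 2 else -1.0   # square-wave setpoint
        u = th_r * r + th_y * y                 # control law u = th_r*r + th_y*y
        e = y - ym                              # tracking error
        th_r += -gamma * e * r * dt             # Lyapunov-based adaptation
        th_y += -gamma * e * y * dt             # (sign of b assumed positive)
        y += (a * y + b * u) * dt               # plant, Euler integration
        ym += (am * ym + bm * r) * dt           # reference model

    print(f"gains ({th_r:.2f}, {th_y:.2f}); ideal ({bm/b:.2f}, {(am - a)/b:.2f})")
    ```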

  13. Space environments and their effects on space automation and robotics

    NASA Technical Reports Server (NTRS)

    Garrett, Henry B.

    1990-01-01

    Automated and robotic systems will be exposed to a variety of environmental anomalies as a result of adverse interactions with the space environment. As an example, the coupling of electrical transients into control systems, due to EMI from plasma interactions and solar array arcing, may cause spurious commands that could be difficult to detect and correct in time to prevent damage during critical operations. Spacecraft glow and space debris could introduce false imaging information into optical sensor systems. The presentation provides a brief overview of the primary environments (plasma, neutral atmosphere, magnetic and electric fields, and solid particulates) that cause such adverse interactions. The descriptions, while brief, are intended to provide a basis for the other papers presented at this conference which detail the key interactions with automated and robotic systems. Given the growing complexity and sensitivity of automated and robotic space systems, an understanding of adverse space environments will be crucial to mitigating their effects.

  14. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

    The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications, but in the LCS the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.

  15. Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    NASA Technical Reports Server (NTRS)

    Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.

    1994-01-01

    Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control where the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.

  16. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors worn by the operator as a glove and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
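    A minimal sketch of the tilt-to-command mapping such a glove might use: gravity's projection onto the accelerometer axes yields roll and pitch, which are thresholded into discrete drive commands. The dead-band value and command set are assumptions for illustration, not the paper's calibration.

    ```python
    import math

    def accel_to_command(ax, ay, az, dead_band=0.35):
        """Map a 3-axis accelerometer reading (in g) to a drive command.

        With the hand level, gravity lies along +z; tilting the hand
        rotates the gravity vector, which we threshold into commands.
        """
        roll = math.atan2(ay, az)
        pitch = math.atan2(-ax, math.hypot(ay, az))
        if pitch > dead_band:
            return "FORWARD"
        if pitch < -dead_band:
            return "REVERSE"
        if roll > dead_band:
            return "RIGHT"
        if roll < -dead_band:
            return "LEFT"
        return "STOP"

    # e.g. hand tilted forward: accel_to_command(-0.5, 0.0, 0.85) -> "FORWARD"
    ```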

  17. Dimensions of complexity in learning from interactive instruction. [for robotic systems deployed in space

    NASA Technical Reports Server (NTRS)

    Huffman, Scott B.; Laird, John E.

    1992-01-01

    Robot systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face during the course of its lifetime. However, pre-programming knowledge for all of the possible tasks that may be needed is extremely difficult. Therefore, a powerful notion is that of an instructible agent, one which is able to receive task-level instructions and advice from a human advisor. An agent must do more than simply memorize the instructions it is given (this would amount to programming). Rather, after mapping instructions into task constructs that it can reason with, it must determine each instruction's proper scope of applicability. In this paper, we will examine the characteristics of instruction, and the characteristics of agents, that affect learning from instruction. We find that in addition to a myriad of linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the prior domain knowledge of the agent have an impact on what can be learned.

  18. Adaptation mechanism of interlimb coordination in human split-belt treadmill walking through learning of foot contact timing: a robotics study

    PubMed Central

    Fujiki, Soichiro; Aoi, Shinya; Funato, Tetsuro; Tomita, Nozomi; Senda, Kei; Tsuchiya, Kazuo

    2015-01-01

    Human walking behaviour adaptation strategies have previously been examined using split-belt treadmills, which have two parallel independently controlled belts. In such human split-belt treadmill walking, two types of adaptations have been identified: early and late. Early-type adaptations appear as rapid changes in interlimb and intralimb coordination activities when the belt speeds of the treadmill change between tied (same speed for both belts) and split-belt (different speeds for each belt) configurations. By contrast, late-type adaptations occur after the early-type adaptations as a gradual change and only involve interlimb coordination. Furthermore, interlimb coordination shows after-effects that are related to these adaptations. It has been suggested that these adaptations are governed primarily by the spinal cord and cerebellum, but the underlying mechanism remains unclear. Because various physiological findings suggest that foot contact timing is crucial to adaptive locomotion, this paper reports on the development of a two-layered control model for walking composed of spinal and cerebellar models, and on its use as the focus of our control model. The spinal model generates rhythmic motor commands using an oscillator network based on a central pattern generator and modulates the commands formulated in immediate response to foot contact, while the cerebellar model modifies motor commands through learning based on error information related to differences between the predicted and actual foot contact timings of each leg. We investigated adaptive behaviour and its mechanism by split-belt treadmill walking experiments using both computer simulations and an experimental bipedal robot. Our results showed that the robot exhibited rapid changes in interlimb and intralimb coordination that were similar to the early-type adaptations observed in humans. In addition, despite the lack of direct interlimb coordination control, gradual changes and after-effects in the interlimb coordination appeared in a manner that was similar to the late-type adaptations and after-effects observed in humans. The adaptation results of the robot were then evaluated in comparison with human split-belt treadmill walking, and the adaptation mechanism was clarified from a dynamic viewpoint. PMID:26289658
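    The toy sketch below reduces the two-layer idea to a single leg: a spinal-like oscillator advances through the swing phase until the foot actually contacts, and a cerebellar-like update nudges the predicted contact phase toward the observed one. The real model uses an oscillator network with interlimb phase modulation; the rates and phases here are purely illustrative.

    ```python
    def learn_contact_phase(actual_contact_phase, n_steps=50, eta=0.1):
        """Adapt the predicted foot-contact phase over repeated steps.

        The oscillator phase advances at rate omega until the (simulated)
        foot contact occurs; the prediction is then corrected by a
        fraction eta of the timing error, mimicking cerebellar learning.
        """
        omega, dt = 6.283, 1e-3          # ~1 Hz stepping rhythm
        phi_pred = 3.142                 # initial predicted contact phase
        for _ in range(n_steps):
            phase = 0.0
            while phase < actual_contact_phase:
                phase += omega * dt      # swing phase advances until contact
            phi_pred += eta * (phase - phi_pred)   # learn from timing error
        return phi_pred

    print(learn_contact_phase(2.5))      # converges toward the observed 2.5
    ```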

  19. Adaptation mechanism of interlimb coordination in human split-belt treadmill walking through learning of foot contact timing: a robotics study.

    PubMed

    Fujiki, Soichiro; Aoi, Shinya; Funato, Tetsuro; Tomita, Nozomi; Senda, Kei; Tsuchiya, Kazuo

    2015-09-06

    Human walking behaviour adaptation strategies have previously been examined using split-belt treadmills, which have two parallel independently controlled belts. In such human split-belt treadmill walking, two types of adaptations have been identified: early and late. Early-type adaptations appear as rapid changes in interlimb and intralimb coordination activities when the belt speeds of the treadmill change between tied (same speed for both belts) and split-belt (different speeds for each belt) configurations. By contrast, late-type adaptations occur after the early-type adaptations as a gradual change and only involve interlimb coordination. Furthermore, interlimb coordination shows after-effects that are related to these adaptations. It has been suggested that these adaptations are governed primarily by the spinal cord and cerebellum, but the underlying mechanism remains unclear. Because various physiological findings suggest that foot contact timing is crucial to adaptive locomotion, this paper reports on the development of a two-layered control model for walking composed of spinal and cerebellar models, and on its use as the focus of our control model. The spinal model generates rhythmic motor commands using an oscillator network based on a central pattern generator and modulates the commands formulated in immediate response to foot contact, while the cerebellar model modifies motor commands through learning based on error information related to differences between the predicted and actual foot contact timings of each leg. We investigated adaptive behaviour and its mechanism by split-belt treadmill walking experiments using both computer simulations and an experimental bipedal robot. Our results showed that the robot exhibited rapid changes in interlimb and intralimb coordination that were similar to the early-type adaptations observed in humans. In addition, despite the lack of direct interlimb coordination control, gradual changes and after-effects in the interlimb coordination appeared in a manner that was similar to the late-type adaptations and after-effects observed in humans. The adaptation results of the robot were then evaluated in comparison with human split-belt treadmill walking, and the adaptation mechanism was clarified from a dynamic viewpoint. © 2015 The Authors.

  20. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control- and supervision technologies are often inadequate because the amount of information available is often presented in a way that the user is effectively overwhelmed by the displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities and new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach for an intuitively operable man machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes and have an action planning component automatically generate task descriptions for the agents involved to project actions that have been carried out by users in the virtual world into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach is to split the job between the task deduction in the VR and the task `projection' onto the physical automation components by the automatic action planning component. Besides describing the realized projective virtual reality system, the paper will also describe in detail the metaphors and visualization aids used to present different types of (e.g. sensor-) information in an intuitively comprehensible manner.

  1. Engineering Sensorial Delay to Control Phototaxis and Emergent Collective Behaviors

    NASA Astrophysics Data System (ADS)

    Mijalkov, Mite; McDaniel, Austin; Wehr, Jan; Volpe, Giovanni

    2016-01-01

    Collective motions emerging from the interaction of autonomous mobile individuals play a key role in many phenomena, from the growth of bacterial colonies to the coordination of robotic swarms. For these collective behaviors to take hold, the individuals must be able to emit, sense, and react to signals. When dealing with simple organisms and robots, these signals are necessarily very elementary; e.g., a cell might signal its presence by releasing chemicals and a robot by shining light. An additional challenge arises because the motion of the individuals is often noisy; e.g., the orientation of cells can be altered by Brownian motion and that of robots by an uneven terrain. Therefore, the emphasis is on achieving complex and tunable behaviors from simple autonomous agents communicating with each other in robust ways. Here, we show that the delay between sensing and reacting to a signal can determine the individual and collective long-term behavior of autonomous agents whose motion is intrinsically noisy. We experimentally demonstrate that the collective behavior of a group of phototactic robots capable of emitting a radially decaying light field can be tuned from segregation to aggregation and clustering by controlling the delay with which they change their propulsion speed in response to the light intensity they measure. We trace this transition to the underlying dynamics of this system, in particular, to the ratio between the robots' sensorial delay time and the characteristic time of the robots' random reorientation. Supported by numerics, we discuss how the same mechanism can be applied to control active agents, e.g., airborne drones, moving in a three-dimensional space. Given the simplicity of this mechanism, the engineering of sensorial delay provides a potentially powerful tool to engineer and dynamically tune the behavior of large ensembles of autonomous mobile agents; furthermore, this mechanism might already be at work within living organisms such as chemotactic cells.
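    A minimal simulation of the mechanism, under assumed parameters, is sketched below: each agent moves at a speed set by the light intensity it measured a delay earlier, while rotational noise continually randomizes headings. The speed law and all constants are illustrative; in the experiments it is the tuning of the delay against the reorientation time that selects aggregation versus segregation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, steps, dt = 50, 20000, 0.01
    delay_steps = 50                    # sensorial delay, in integration steps
    v0, Dr = 1.0, 0.5                   # base speed, rotational diffusion

    def intensity(pos):
        """Radially decaying light field centred at the origin."""
        return np.exp(-np.linalg.norm(pos, axis=1))

    pos = rng.uniform(-5, 5, (N, 2))
    theta = rng.uniform(0, 2 * np.pi, N)
    history = [intensity(pos)] * (delay_steps + 1)   # buffer of past readings

    for _ in range(steps):
        i_delayed = history.pop(0)                   # reading from time t - delay
        speed = v0 / (1.0 + 5.0 * i_delayed)         # slow down where light was bright
        heading = np.column_stack((np.cos(theta), np.sin(theta)))
        pos += speed[:, None] * heading * dt
        theta += np.sqrt(2 * Dr * dt) * rng.standard_normal(N)  # rotational noise
        history.append(intensity(pos))

    print("mean distance from source:", np.linalg.norm(pos, axis=1).mean())
    ```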

  2. Robotically assisted small animal MRI-guided mouse biopsy

    NASA Astrophysics Data System (ADS)

    Wilson, Emmanuel; Chiodo, Chris; Wong, Kenneth H.; Fricke, Stanley; Jung, Mira; Cleary, Kevin

    2010-02-01

    Small mammals, namely mice and rats, play an important role in biomedical research. Imaging, in conjunction with accurate therapeutic agent delivery, has tremendous value in small animal research since it enables serial, non-destructive testing of animals and facilitates the study of biomarkers of disease progression. The small size of organs in mice lends some difficulty to accurate biopsies and therapeutic agent delivery. Image guidance with the use of robotic devices should enable more accurate and repeatable targeting for biopsies and delivery of therapeutic agents, as well as the ability to acquire tissue from a pre-specified location based on image anatomy. This paper presents our work in integrating a robotic needle guide device, specialized stereotaxic mouse holder, and magnetic resonance imaging, with a long-term goal of performing accurate and repeatable targeting in anesthetized mice studies.

  3. Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes

    DTIC Science & Technology

    2016-06-01

    making (SDM) tasks in dynamic environments with simulated and physical robots. 15. SUBJECT TERMS Sequential decision making, lifelong learning, transfer...sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work...and ground robots in the presence of disturbances: We applied our methods to the problem of learning controllers for robots with novel disturbances in

  4. The Unified Behavior Framework for the Simulation of Autonomous Agents

    DTIC Science & Technology

    2015-03-01

    1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. A recently developed ...Identification Friend or Foe viii THE UNIFIED BEHAVIOR FRAMEWORK FOR THE SIMULATION OF AUTONOMOUS AGENTS I. Introduction The development of autonomy has...room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive

  5. A bio-inspired swarm robot coordination algorithm for multiple target searching

    NASA Astrophysics Data System (ADS)

    Meng, Yan; Gan, Jing; Desai, Sachi

    2008-04-01

    The coordination of a multi-robot system searching for multiple targets is challenging in a dynamic environment, since the multi-robot system demands group coherence (agents need the incentive to work together faithfully) and group competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of robot movement during coordination, which may lead to higher power consumption and longer searching time. To address these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers travel cost and target weight, but also predicts the target/robot ratio and potential robot redundancy with respect to the detected targets. Furthermore, a dynamic weight adjustment is applied to improve searching performance. This new method is truly distributed: each robot makes its own decision based on its local sensing information and information from its neighbors. Basically, each robot communicates only with its neighbors through a virtual stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm. The proposed ALIVS algorithm has been implemented on the embodied robot simulator Player/Stage in a target-searching task. The simulation results demonstrate efficiency and robustness in a power-efficient manner under real-world constraints.
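    The sketch below shows one PSO-style velocity/position update of the kind that could drive each robot's local movement decision, with the personal-best and neighborhood-best positions assumed to arrive through the virtual stigmergy mechanism; the coefficients and speed limit are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pso_step(position, velocity, personal_best, neighborhood_best,
                 w=0.7, c1=1.5, c2=1.5, v_max=0.5):
        """One PSO update for a single robot's local movement decision.

        Inertia is blended with attraction toward the robot's own best
        sensed point and the best point reported by its neighbors, then
        the step is clipped to the robot's speed limit.
        """
        r1, r2 = rng.random(2)
        velocity = (w * velocity
                    + c1 * r1 * (personal_best - position)
                    + c2 * r2 * (neighborhood_best - position))
        speed = np.linalg.norm(velocity)
        if speed > v_max:
            velocity *= v_max / speed
        return position + velocity, velocity
    ```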

  6. Robotics control using isolated word recognition of voice input

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility is comprised of a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100-command), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.
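    One classic recipe for isolated-word recognition of this era is template matching with dynamic time warping (DTW) over per-frame feature vectors; the sketch below shows that general approach as an illustration, not the paper's specific hardware-assisted recognizer.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two feature sequences.

        a, b : (T, d) arrays of per-frame feature vectors. The warping
        absorbs differences in speaking rate between test and template.
        """
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)      # length-normalized path cost

    def recognize(utterance, templates):
        """Return the vocabulary word whose stored template is nearest."""
        return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
    ```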

  7. Human Cognitive Processes in Command and Control Planning. 3. Determining Basic Processes Involved in Planning in Time and Space (Cognitieve Processen in Command and Control Planning. 3. Basisprocessen in Planning in Tijd en Ruimte)

    DTIC Science & Technology

    1991-08-07

    ...contains spatial components. The study had two goals: developing a method for determining the cognitive processes associated with...planning, and developing a model of efficient planning for the task used in this study. Two planners gave verbal and...graphical protocols while planning the most efficient route for a store robot to pick up goods in a store. For twelve

  8. 78 FR 1848 - Intent To Grant an Exclusive License of a U.S. Government-Owned Invention

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-09

    ..., entitled ``Method of Diagnosing of Exposure to Toxic Agents by Measuring Distinct Pattern in the Levels of..., Inc., with its principal place of business at 4938 Hampden Lane 319, Bethesda, Maryland 20814-2914. ADDRESSES: Commander, U.S. Army Medical Research and Materiel Command, ATTN: Command Judge Advocate, MCMR-JA...

  9. Combining a hybrid robotic system with a brain-machine interface for the rehabilitation of reaching movements: A case study with a stroke patient.

    PubMed

    Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis

    2016-08-01

    Reaching and grasping are two of the functions most affected after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation (FES) with robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface (BMI) to detect the user's movement intention and trigger the assistance. The platform was tested in a single session with a stroke patient. The results show that the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.

  10. iss050e059620

    NASA Image and Video Library

    2017-03-24

    iss050e059620 (03/24/2017) --- Expedition 50 Commander Shane Kimbrough of NASA is seen floating into the Quest airlock at the conclusion of a spacewalk. Kimbrough and Flight Engineer Thomas Pesquet of ESA (European Space Agency) conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  11. iss050e059613

    NASA Image and Video Library

    2017-03-24

    iss050e059613 (03/24/2017) --- Expedition 50 Commander Shane Kimbrough of NASA is seen floating into the Quest airlock at the conclusion of a spacewalk. Kimbrough and Flight Engineer Thomas Pesquet of ESA (European Space Agency) conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  12. iss050e059576

    NASA Image and Video Library

    2017-03-24

    iss050e059576 (03/24/2017) --- Russian cosmonaut Oleg Novitskiy (middle) poses with Expedition 50 Commander Shane Kimbrough of NASA (left) and Flight Engineer Thomas Pesquet of ESA (European Space Agency) (right) prior to their spacewalk. The pair conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  13. iss050e059579

    NASA Image and Video Library

    2017-03-24

    iss050e059579 (03/24/2017) --- NASA astronaut Peggy Whitson (middle) poses with Expedition 50 Commander Shane Kimbrough of NASA (left) and Flight Engineer Thomas Pesquet of ESA (European Space Agency) (right) prior to their spacewalk. The pair conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  14. iss050e059752

    NASA Image and Video Library

    2017-03-24

    iss050e059752 (03/24/2017) --- Flight Engineer Thomas Pesquet of ESA (European Space Agency) is seen floating outside the International Space Station during a spacewalk. Pesquet and Expedition 50 Commander Shane Kimbrough of NASA conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.

  15. Robot Control Through Brain Computer Interface For Patterns Generation

    NASA Astrophysics Data System (ADS)

    Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.

    2011-09-01

    A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. Such a system can allow people with motor disabilities to control external devices through real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.

  16. Voice Controlled Wheelchair

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Michael Condon, a quadriplegic from Pasadena, California, demonstrates the NASA-developed voice-controlled wheelchair and its manipulator, which can pick up packages, open doors, turn a TV knob, and perform a variety of other functions. A possible boon to paralyzed and other severely handicapped persons, the chair-manipulator system responds to 35 one-word voice commands, such as "go," "stop," "up," "down," "right," "left," "forward," "backward." The heart of the system is a voice-command analyzer which utilizes a minicomputer. Commands are taught to the computer by the patient's repeating them a number of times; thereafter the analyzer recognizes commands only in the patient's particular speech pattern. The computer translates commands into electrical signals which activate appropriate motors and cause the desired motion of chair or manipulator. Based on teleoperator and robot technology for space-related programs, the voice-controlled system was developed by the Jet Propulsion Laboratory under the joint sponsorship of NASA and the Veterans Administration. The wheelchair-manipulator has been tested at Rancho Los Amigos Hospital, Downey, California, and is being evaluated at the VA Prosthetics Center in New York City.

  17. A natural-language interface to a mobile robot

    NASA Technical Reports Server (NTRS)

    Michalowski, S.; Crangle, C.; Liang, L.

    1987-01-01

    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.

  18. Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 6 - Development and Demonstration of a Self-Organizing Diagnostic System for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail; hide

    2010-01-01

    This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.

  19. Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it.

    PubMed

    Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A

    2014-05-01

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

  20. Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics

    NASA Astrophysics Data System (ADS)

    Pierpaoli, Pietro

    Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on developing distributed techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithm complexity becomes mandatory from the very early stages of the project design. Solutions capable of lowering power consumption and cost while increasing reliability are thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effect of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles, and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as sources of information in highly populated environments.

  1. Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

    PubMed Central

    Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan

    2012-01-01

    Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853

  2. Multi-agent autonomous system and method

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor); Dohm, James (Inventor); Tarbell, Mark A. (Inventor)

    2010-01-01

    A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory.
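    The command pattern described here can be reduced to a few lines: for each craft, the command system computes a straight trajectory to its destination and issues a move of fixed distance along it. The sketch below is an illustrative reduction, not the patented implementation; all names are hypothetical.

    ```python
    import numpy as np

    def step_command(position, destination, step):
        """Move a craft a fixed distance along the straight trajectory
        toward its destination, clamping at the destination itself."""
        offset = destination - position
        dist = np.linalg.norm(offset)
        if dist <= step:
            return destination.copy()
        return position + (step / dist) * offset

    # Command system dispatching one move to each of two crafts:
    crafts = {"craft1": np.array([0.0, 0.0]), "craft2": np.array([5.0, 5.0])}
    goals = {"craft1": np.array([10.0, 0.0]), "craft2": np.array([5.0, -5.0])}
    for name in crafts:
        crafts[name] = step_command(crafts[name], goals[name], step=1.0)
    print(crafts)
    ```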

  3. STS-111 Onboard Photo of Endeavour Docking With PMA-2

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The STS-111 mission, the 14th Shuttle mission to visit the International Space Station (ISS), was launched on June 5, 2002 aboard the Space Shuttle Orbiter Endeavour. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot; and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valeri G. Korzun, commander, Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish mission objectives: the delivery and installation of the Mobile Remote Servicer Base System (MBS), an important part of the Station's Mobile Servicing System that allows the robotic arm to travel the length of the Station, which is necessary for future construction tasks; the replacement of a wrist roll joint on the Station's robotic arm; and the task of unloading supplies and science experiments from the Leonardo Multipurpose Logistics Module, which made its third trip to the orbital outpost. In this photograph, the Space Shuttle Endeavour, backdropped by the blackness of space, is docked to the Pressurized Mating Adapter (PMA-2) at the forward end of the Destiny Laboratory on the ISS. Endeavour's robotic arm is in full view as it is stretched out with the S0 (S-zero) Truss at its end.

  4. Multimodal interaction for human-robot teams

    NASA Astrophysics Data System (ADS)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  5. International Space Station (ISS)

    NASA Image and Video Library

    2002-06-09

    The STS-111 mission, the 14th Shuttle mission to visit the International Space Station (ISS), was launched on June 5, 2002 aboard the Space Shuttle Orbiter Endeavour. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot; and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valeri G. Korzun, commander, Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish the delivery and installation of the Mobile Remote Servicer Base System (MBS), an important part of the Station's Mobile Servicing System that allows the robotic arm to travel the length of the Station, which is necessary for future construction tasks; the replacement of a wrist roll joint on the Station's robotic arm; and the task of unloading supplies and science experiments from the Leonardo Multipurpose Logistics Module, which made its third trip to the orbital outpost. In this photograph, the Space Shuttle Endeavour, backdropped by the blackness of space, is docked to the Pressurized Mating Adapter (PMA-2) at the forward end of the Destiny Laboratory on the ISS. A portion of the Canadarm2 is visible on the right and Endeavour's robotic arm is in full view as it is stretched out with the S0 (S-zero) Truss at its end.

  6. International Space Station (ISS)

    NASA Image and Video Library

    2002-06-09

    The STS-111 mission, the 14th Shuttle mission to visit the International Space Station (ISS), was launched on June 5, 2002 aboard the Space Shuttle Orbiter Endeavour. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot; and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valeri G. Korzun, commander, Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish mission objectives: the delivery and installation of the Mobile Remote Servicer Base System (MBS), an important part of the Station's Mobile Servicing System that allows the robotic arm to travel the length of the Station, which is necessary for future construction tasks; the replacement of a wrist roll joint on the Station's robotic arm; and the task of unloading supplies and science experiments from the Leonardo Multipurpose Logistics Module, which made its third trip to the orbital outpost. In this photograph, the Space Shuttle Endeavour, backdropped by the blackness of space, is docked to the Pressurized Mating Adapter (PMA-2) at the forward end of the Destiny Laboratory on the ISS. Endeavour's robotic arm is in full view as it is stretched out with the S0 (S-zero) Truss at its end.

  7. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.

    PubMed

    Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W

    2014-04-01

    This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
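    A plausible sketch of the automatic mode's equal spacing, assuming the incision is available as an ordered contour of image points: re-parameterize the contour by arc length and sample it uniformly.

    ```python
    import numpy as np

    def equally_spaced_stitches(contour, n_stitches):
        """Place n equally spaced stitch targets along an incision contour.

        contour : (M, 2) ordered points tracing the incision. The contour
        is re-parameterized by cumulative arc length, then sampled at
        uniform arc-length intervals.
        """
        contour = np.asarray(contour, dtype=float)
        seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
        s = np.concatenate(([0.0], np.cumsum(seg)))   # arc length at each vertex
        targets = np.linspace(0.0, s[-1], n_stitches)
        x = np.interp(targets, s, contour[:, 0])
        y = np.interp(targets, s, contour[:, 1])
        return np.column_stack((x, y))

    # A straight 40 mm incision with 5 stitches lands one every 10 mm:
    print(equally_spaced_stitches([[0.0, 0.0], [40.0, 0.0]], 5))
    ```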

  8. Design and control of a macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff

    1993-01-01

    Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. Control strategies that enable precise force manipulation are difficult to execute in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design: actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load-balanced for maximum execution speed on the multiprocessor system. Delicate force tasks such as polishing, finishing, cleaning, and deburring are the target applications of the robot.
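    The canonical target dynamics behind the impedance control method can be sketched in a few lines: the controller solves M·e'' + B·e' + K·e = f_ext for the commanded tip acceleration, so that the tip behaves like a programmable mass-spring-damper at the contact. The gains and interface below are illustrative, not the paper's tuned values.

    ```python
    import numpy as np

    def impedance_accel(e, e_dot, f_ext,
                        M=np.eye(3), B=60 * np.eye(3), K=400 * np.eye(3)):
        """Solve the target impedance  M*e'' + B*e' + K*e = f_ext  for e''.

        e, e_dot : tip position/velocity error relative to the reference
        f_ext    : measured environment force on the tip
        Returns the commanded acceleration that renders the desired
        mass-spring-damper behavior at the contact point.
        """
        return np.linalg.solve(M, f_ext - B @ e_dot - K @ e)

    # e.g. a 1 N contact force with the tip 2 mm past the reference:
    print(impedance_accel(np.array([0.002, 0.0, 0.0]),
                          np.zeros(3),
                          np.array([-1.0, 0.0, 0.0])))
    ```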

  9. Technology transfer: Imaging tracker to robotic controller

    NASA Technical Reports Server (NTRS)

    Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don

    1988-01-01

    The transformation of an imaging tracker into a robotic controller is described. A multimode tracker was developed for fire-and-forget missile systems. The tracker locks on to target images within an acquisition window, using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used, with the addition of a ranging algorithm based on sizing a cooperative target, to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and is described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Manned Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator System (RMS) in autonomous approach and positioning demonstrations. These efforts are also discussed.

  10. The AGINAO Self-Programming Engine

    NASA Astrophysics Data System (ADS)

    Skaba, Wojciech

    2013-01-01

    AGINAO is a project to create a human-level artificial general intelligence (HL AGI) system embodied in the Aldebaran Robotics NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is an embedded, multi-threaded control program that is self-crafted rather than hand-crafted and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges from placing the robot in a natural preschool-like environment and running a core start-up system that performs self-programming of the cognitive layer on top of the core layer. Data from the robot's sensory devices supply training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and receiving feedback. The individual self-created subroutines are intended to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world's dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.

  11. Design of a Teleoperated Needle Steering System for MRI-guided Prostate Interventions

    PubMed Central

    Seifabadi, Reza; Iordachita, Iulian; Fichtinger, Gabor

    2013-01-01

    Accurate needle placement plays a key role in the success of prostate biopsy and brachytherapy. During percutaneous interventions, the prostate gland rotates and deforms, which may cause significant target displacement; in these cases, a straight needle trajectory is not sufficient for precise targeting. Although needle spinning and fast insertion may help, they do not entirely resolve the issue. We propose robot-assisted bevel-tip needle steering under MRI guidance as a potential solution to compensate for the target displacement. MRI is chosen for its superior soft-tissue contrast in prostate imaging. Because of the confined workspace of the MRI scanner and the requirement for the clinician to be present inside the MRI room during the procedure, we designed an MRI-compatible 2-DOF haptic device to command the needle-steering slave robot, which operates inside the scanner. The needle-steering slave robot was designed to be integrated with a previously developed pneumatically actuated transperineal robot for MRI-guided prostate needle placement. We describe the design challenges and present the conceptual design of the master and slave robots and the associated controller. PMID:24649480
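
    As background on why a bevel tip permits steering: an asymmetric tip makes the needle follow an approximately constant-curvature path, commonly modeled as a planar unicycle. The sketch below integrates that textbook model; the curvature and step size are illustrative, and the actual slave robot adds needle-base rotation for out-of-plane control.

        import math

        def insert_bevel_needle(x, y, theta, depth, kappa=0.02, step=1.0):
            """Planar unicycle model of bevel-tip steering: each millimeter of
            insertion advances the tip along its heading and bends the heading
            by kappa*step radians (kappa = path curvature in 1/mm, illustrative)."""
            for _ in range(int(depth / step)):
                x += step * math.cos(theta)
                y += step * math.sin(theta)
                theta += kappa * step
            return x, y, theta

        # The tip deflects progressively off the initial axis as insertion deepens.
        print(insert_bevel_needle(0.0, 0.0, 0.0, depth=60.0))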

  12. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free-flying autonomous vehicle that leaves a berthing unit to accomplish a variety of inspection tasks around the Space Station and then returns to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice commands to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires integrating traditional command and control techniques with a number of AI technologies: speech recognition, natural language understanding, task and path planning, and sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real-time computing requirements. To study the integration of these elements, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system's operation and structure are discussed.
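
    A minimal sketch of the blackboard pattern named above, with invented knowledge sources standing in for the speech, planning, and control modules of the Flying Eye system:

        class Blackboard:
            """Shared memory that independent knowledge sources read and post to."""
            def __init__(self):
                self.data = {}

        def speech_ks(bb):
            # Hypothetical knowledge source: turn a recognized utterance into a goal.
            if bb.data.get("utterance") == "inspect truss":
                bb.data["goal"] = ("goto", "truss")

        def planner_ks(bb):
            # Hypothetical knowledge source: expand a pending goal into a path.
            if "goal" in bb.data and "path" not in bb.data:
                bb.data["path"] = ["berth", "node_a", bb.data["goal"][1]]

        bb = Blackboard()
        bb.data["utterance"] = "inspect truss"
        for ks in (speech_ks, planner_ks):  # a real controller loops and arbitrates
            ks(bb)
        print(bb.data["path"])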

  13. Practical robotic self-awareness and self-knowledge

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    2011-05-01

    The functional software components of an autonomous robotic system express behavior via commands to its actuators, based on processed inputs from its sensors. We propose an additional set of "cognitive" capabilities for robotic systems of all types, based on comprehensive logging of all available data, including sensor inputs, behavioral states, and outputs sent to actuators. A robot should maintain a "sense" of its own (piecewise) continuous existence through time and space; it should in some sense "get a life," providing a level of self-awareness and self-knowledge. Self-awareness includes the ability to survive and work through unexpected power glitches while executing a task or mission. Self-knowledge includes an extensive world model, including a model of self and of the purpose context in which the robot is operating (deontics). The system must support proactive self-test, monitoring, and calibration, and maintain a "personal" health/repair history, supporting system test and evaluation by continuously measuring performance throughout the entire product lifecycle. It will include episodic memory and a system "lifelog," and will also participate in multiple modes of human-robot interaction (HRI).
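
    As a toy sketch of the comprehensive-logging idea, every sensor input, behavioral state, and actuator output can be appended to a timestamped lifelog; the record fields below are invented for illustration.

        import json, time

        def log_event(lifelog, kind, payload):
            """Append one timestamped record (sensor/state/actuator) to the lifelog."""
            lifelog.append({"t": time.time(), "kind": kind, "data": payload})

        lifelog = []
        log_event(lifelog, "sensor", {"sonar_m": 1.7})
        log_event(lifelog, "state", {"behavior": "wall_follow"})
        log_event(lifelog, "actuator", {"wheel_cmd": [0.4, 0.35]})
        # In practice the log would be persisted so it survives power glitches.
        print(json.dumps(lifelog, indent=2))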

  14. An Intelligent Agent Approach for Teaching Neural Networks Using LEGO[R] Handy Board Robots

    ERIC Educational Resources Information Center

    Imberman, Susan P.

    2004-01-01

    In this article we describe a project for an undergraduate artificial intelligence class. The project teaches neural networks using LEGO[R] handy board robots. Students construct robots with two motors and two photosensors. Photosensors provide readings that act as inputs for the neural network. Output values power the motors and maintain the…

  15. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    ERIC Educational Resources Information Center

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  16. Remote Control and Children's Understanding of Robots

    ERIC Educational Resources Information Center

    Somanader, Mark C.; Saylor, Megan M.; Levin, Daniel T.

    2011-01-01

    Children use goal-directed motion to classify agents as living things from early in infancy. In the current study, we asked whether preschoolers are flexible in their application of this criterion by introducing them to robots that engaged in goal-directed motion. In one case the robot appeared to move fully autonomously, and in the other case it…

  17. Dogs Identify Agents in Third-Party Interactions on the Basis of the Observed Degree of Contingency.

    PubMed

    Tauzin, Tibor; Kovács, Krisztina; Topál, József

    2016-08-01

    To investigate whether dogs could recognize contingent reactivity as a marker of agents' interaction, we performed an experiment in which dogs were presented with third-party contingent events. In the perfect-contingency condition, dogs were shown an unfamiliar self-propelled agent (SPA) that performed actions corresponding to audio clips of verbal commands played by a computer. In the high-but-imperfect-contingency condition, the SPA responded to the verbal commands on only two thirds of the trials; in the low-contingency condition, the SPA responded to the commands on only one third of the trials. In the test phase, the SPA approached one of two tennis balls, and then the dog was allowed to choose one of the balls. The proportion of trials on which a dog chose the object indicated by the SPA increased with the degree of contingency: Dogs chose the target object significantly above chance level only in the perfect-contingency condition. This finding suggests that dogs may use the degree of temporal contingency observed in third-party interactions as a cue to identify agents. © The Author(s) 2016.

  18. Naval Air Operations Within the Role of JFACC: Lessons Learned and Future Roles

    DTIC Science & Technology

    1994-02-08

    remains the principal executing agent for employing that air power." (Emphasis added.) 7 components informs the JFC and the JFACC of available direct support...an afloat JFACC or command as the JFACC. Chapter II reviews background information concerning joint air operations and defines command and control...direct support of service missions. In practice the JTCB has become the JFC's agent for ensuring the effective application of theater air power. The JFACC

  19. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
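
    The position-based visual servoing loop can be caricatured as a regulator that converts the Kinect-measured head displacement into inflate/deflate commands; the proportional-integral structure and the gains below are assumptions for illustration, not the controller reported in the abstract.

        def pi_valve_controller(target_mm, dt=0.05, kp=0.8, ki=0.2):
            """Closed-loop regulator: the error between the desired and measured
            head positions drives a valve command (positive = inflate)."""
            integral = 0.0
            def step(measured_mm):
                nonlocal integral
                error = target_mm - measured_mm
                integral += error * dt
                return kp * error + ki * integral
            return step

        ctrl = pi_valve_controller(target_mm=20.0)
        print(ctrl(0.0))   # large inflate command at startup
        print(ctrl(18.5))  # small correction near the regulated position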

  20. Distributed environmental control

    NASA Technical Reports Server (NTRS)

    Cleveland, Gary A.

    1992-01-01

    We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).

  1. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

    This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user, relying on the natural intelligence of the user to simplify the design of the robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture that facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  2. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. 
The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
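
    The step-by-step directions suggest that the learned topological map behaves like a graph over detected intersections. Below is a minimal sketch of route recovery over such a map; the intersection names are invented, and this is not the CRS code.

        from collections import deque

        def shortest_route(topo_map, start, goal):
            """Breadth-first search over a topological map whose nodes are
            detected intersections; returns the landmark-by-landmark route."""
            frontier, parent = deque([start]), {start: None}
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    route = []
                    while node is not None:
                        route.append(node)
                        node = parent[node]
                    return route[::-1]
                for nxt in topo_map.get(node, ()):
                    if nxt not in parent:
                        parent[nxt] = node
                        frontier.append(nxt)
            return None

        # Hypothetical intersections learned while searching a building.
        topo_map = {"start": ["T1"], "T1": ["start", "L2", "X3"],
                    "L2": ["T1"], "X3": ["T1", "goal"], "goal": ["X3"]}
        print(" -> ".join(shortest_route(topo_map, "start", "goal")))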

  3. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    PubMed

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.

  4. Design and validation of an intelligent wheelchair towards a clinically-functional outcome.

    PubMed

    Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle

    2013-06-17

    Many people with mobility impairments who require the use of powered wheelchairs have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant) compared to the scores obtained with the conventional driving mode. This analysis was performed with regular users who had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode as with the conventional command mode.

  5. Dark Horizon: Airpower Revolution on a Razors Edge - Part Two of the Nightfall Series

    DTIC Science & Technology

    2015-10-01

    Education and Training Command, Air University, or other agencies or departments of the US government. This article may be reproduced in whole or in...machine pilot and monitor its performance, a new set of possibilities emerges. Consequently, the almost comical question "If two robotic airplanes

  6. Remote mission specialist - A study in real-time, adaptive planning

    NASA Technical Reports Server (NTRS)

    Rokey, Mark J.

    1990-01-01

    A high-level planning architecture for robotic operations is presented. The remote mission specialist integrates high-level directives with low-level primitives executable by a run-time controller for command of autonomous servicing activities. The planner has been designed to address such issues as adaptive plan generation, real-time performance, and operator intervention.

  7. Centralized Planning for Multiple Exploratory Robots

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Chien, Steve; Barrett, Anthony

    2005-01-01

    A computer program automatically generates plans for a group of robotic vehicles (rovers) engaged in geological exploration of terrain. The program rapidly generates multiple command sequences that can be executed simultaneously by the rovers. Starting from a set of high-level goals, the program creates a sequence of commands for each rover while respecting the hardware constraints and resource limitations of each rover and of hardware (e.g., a radio communication terminal) shared by all the rovers. First, a separate model of each rover is loaded into a centralized planning subprogram. The centralized planning software uses the models of the rovers plus an iterative repair algorithm to resolve conflicts posed by demands for resources and by constraints associated with all the rovers and the shared hardware. During repair, heuristics are used to make planning decisions that yield better solutions, found faster, than would otherwise be possible. In particular, techniques from prior solutions of the multiple-traveling-salesmen problem are used as heuristics to generate plans in which the paths taken by the rovers to assigned scientific targets are shorter than they would otherwise be.
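
    As a flavor of the multiple-traveling-salesmen heuristics mentioned above, the sketch below greedily assigns each science target to the rover whose route grows the least; the coordinates and rover names are invented, and the actual planner combines such heuristics with iterative repair.

        import math

        def assign_targets(rover_starts, targets):
            """Greedy MTSP heuristic: repeatedly give the next target to the
            rover whose current path end is closest, which tends to keep the
            per-rover routes short."""
            routes = {r: [start] for r, start in rover_starts.items()}
            for t in targets:
                best = min(routes, key=lambda r: math.dist(routes[r][-1], t))
                routes[best].append(t)
            return routes

        routes = assign_targets({"rover1": (0, 0), "rover2": (10, 0)},
                                [(1, 2), (9, 1), (2, 5), (11, 4)])
        print(routes)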

  8. Development of a teaching system for an industrial robot using stereo vision

    NASA Astrophysics Data System (ADS)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching and playback method is the main technique for teaching industrial robots; however, it takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed, because fuzzy set theory, which can express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and test data have confirmed the usefulness of our design.
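
    A crude sketch of replacing kinematic equations with qualitative fuzzy rules: the angular error relayed through the cameras is mapped to a small joint increment, and the step repeats until the instructed teaching point is reached. The membership breakpoints and increments below are invented.

        def fuzzy_joint_step(error_deg):
            """Map the angular error seen in the image to a qualitative class
            ("zero", "small", "large") and then to a joint increment; the
            breakpoints and increments are illustrative, not from the paper."""
            if abs(error_deg) < 1.0:       # "zero"
                return 0.0
            if abs(error_deg) < 10.0:      # "small"
                return 0.5 if error_deg > 0 else -0.5
            return 2.0 if error_deg > 0 else -2.0  # "large"

        angle, target = 0.0, 17.0
        while abs(target - angle) >= 1.0:
            angle += fuzzy_joint_step(target - angle)
        print(round(angle, 1))  # settles within the "zero" band around the target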

  9. Solar Thermal Utility-Scale Joint Venture Program (USJVP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANCINI,THOMAS R.

    2001-04-01

    Several years ago Sandia National Laboratories developed a prototype interior robot [1] that could navigate autonomously inside a large complex building to aid and test interior intrusion detection systems. Recently the Department of Energy Office of Safeguards and Security has supported the development of a vehicle that will perform limited security functions autonomously in a structured exterior environment. The goal of the first phase of this project was to demonstrate the feasibility of an exterior robotic vehicle for security applications by using converted interior robot technology, if applicable. An existing teleoperational test bed vehicle with remote driving controls was modified and integrated with a newly developed command driving station and navigation system hardware and software to form the Robotic Security Vehicle (RSV) system. The RSV, also called the Sandia Mobile Autonomous Navigator (SANDMAN), has been successfully used to demonstrate that teleoperated security vehicles which can perform limited autonomous functions are viable and have the potential to decrease security manpower requirements and improve system capabilities.

  10. Unsolved problems in observational astronomy. II. Focus on rapid response - mining the sky with "thinking" telescopes

    NASA Astrophysics Data System (ADS)

    Vestrand, W. T.; Theiler, J.; Woznia, P. R.

    2004-10-01

    The existence of rapidly slewing robotic telescopes and fast alert distribution via the Internet is revolutionizing our capability to study the physics of fast astrophysical transients. But the salient challenge that optical time-domain surveys must conquer is mining the torrent of data to recognize important transients in a scene full of normal variations. Humans simply do not have the attention span, memory, or reaction time required to recognize fast transients and rapidly respond. Autonomous robotic instrumentation with the ability to extract pertinent information from the data stream in real time will therefore be essential for recognizing transients and commanding rapid follow-up observations while the ephemeral behavior is still present. Here we discuss how the development and integration of three technologies: (1) robotic telescope networks; (2) machine learning; and (3) advanced database technology, can enable the construction of smart robotic telescopes, which we loosely call "thinking" telescopes, capable of mining the sky in real time.

  11. Hierarchical Robot Control System and Method for Controlling Select Degrees of Freedom of an Object Using Multiple Manipulators

    NASA Technical Reports Server (NTRS)

    Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor)

    2013-01-01

    A robotic system includes a robot having manipulators for grasping an object using one of a plurality of grasp types during a primary task, and a controller. The controller controls the manipulators during the primary task using a multiple-task control hierarchy and automatically parameterizes the internal forces of the system for each grasp type in response to an input signal. The primary task is defined at an object level of control, e.g., using a closed-chain transformation, such that only select degrees of freedom are commanded for the object. A control system for the robotic system has a host machine and an algorithm for controlling the manipulators using the above hierarchy. A method for controlling the system includes receiving and processing the input signal using the host machine, including defining the primary task at the object level of control, e.g., using a closed-chain definition, and parameterizing the internal forces for each grasp type.
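
    The multiple-task control hierarchy can be illustrated with the standard null-space task-priority construction from redundant-manipulator control; this is a generic sketch with random stand-in Jacobians, not the patented controller.

        import numpy as np

        def two_level_hierarchy(J1, v1, J2, v2):
            """Task-priority control: the primary (object-level) task
            J1 @ qdot = v1 is satisfied exactly via the pseudoinverse, while a
            secondary task (e.g., internal-force regulation) acts only in the
            primary task's null space, so it cannot disturb the object."""
            J1p = np.linalg.pinv(J1)
            N1 = np.eye(J1.shape[1]) - J1p @ J1          # null-space projector
            v2_res = v2 - J2 @ (J1p @ v1)                # leftover secondary demand
            return J1p @ v1 + N1 @ (np.linalg.pinv(J2 @ N1) @ v2_res)

        rng = np.random.default_rng(0)
        J1, J2 = rng.normal(size=(3, 7)), rng.normal(size=(2, 7))
        v1, v2 = rng.normal(size=3), rng.normal(size=2)
        qdot = two_level_hierarchy(J1, v1, J2, v2)
        print(np.allclose(J1 @ qdot, v1))                # True: primary task preserved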

  12. Coordinating teams of autonomous vehicles: an architectural perspective

    NASA Astrophysics Data System (ADS)

    Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo

    2005-05-01

    In defense-related robotics research, a mission-level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensor, and weaponry resources (platform-specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams composed of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance, are being developed by a U.S. Army-funded SBIR project (RDECOM Contract N61339-04-C-0005).

  13. Automated constraint checking of spacecraft command sequences

    NASA Astrophysics Data System (ADS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Spitale, Joseph M.; Le, Dang

    1995-01-01

    Robotic spacecraft are controlled by onboard sets of commands called "sequences." Determining that sequences will have the desired effect on the spacecraft can be expensive in terms of both labor and computer coding time, with different particular costs for different types of spacecraft. Specification languages and an appropriate user interface to those languages can be used to make the most effective use of engineering validation time. This paper describes one specification and verification environment ("SAVE") designed for validating that command sequences have not violated any flight rules. The SAVE system was subsequently adapted for flight use on the TOPEX/Poseidon spacecraft. The relationship of this work to rule-based artificial intelligence and to other specification techniques is discussed, as well as the issues that arise in the transfer of technology from a research prototype to a full flight system.
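
    In spirit, a flight-rule checker walks the command sequence while tracking modeled spacecraft state and flags any command that violates a rule. The commands and the rule below are invented examples, not actual TOPEX/Poseidon flight rules.

        def check_sequence(sequence, rules):
            """Simulate spacecraft state through the sequence and report each
            command that violates a flight rule (a predicate over state+command)."""
            state, violations = {"recorder_on": False}, []
            for step, cmd in enumerate(sequence):
                for name, rule in rules.items():
                    if not rule(state, cmd):
                        violations.append((step, cmd["op"], name))
                state.update(cmd.get("effects", {}))
            return violations

        rules = {  # hypothetical flight rule: never play back while recording
            "no_playback_while_recording":
                lambda s, c: not (c["op"] == "PLAYBACK" and s["recorder_on"]),
        }
        seq = [{"op": "REC_ON", "effects": {"recorder_on": True}},
               {"op": "PLAYBACK"}]
        print(check_sequence(seq, rules))  # -> [(1, 'PLAYBACK', 'no_playback_while_recording')]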

  14. Dynamic Routing and Coordination in Multi-Agent Networks

    DTIC Science & Technology

    2016-06-10

    Supported by this project, we designed innovative routing, planning and coordination strategies for robotic networks and...tasks partitioned among robots, in what order are they to be performed, and along which deterministic routes or according to which stochastic rules do...individual robots move. The fundamental novelties and our recent breakthroughs supported by this project are manifold: (1) the application

  15. Fronto-parietal coding of goal-directed actions performed by artificial agents.

    PubMed

    Kupferberg, Aleksandra; Iacoboni, Marco; Flanagin, Virginia; Huber, Markus; Kasparbauer, Anna; Baumgartner, Thomas; Hasler, Gregor; Schmidt, Florian; Borst, Christoph; Glasauer, Stefan

    2018-03-01

    With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates the naturalness and ease of interaction. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it was unknown whether this activity is modulated by observing grasping (a self-related action) and pointing (an other-related action) performed by an artificial agent, depending on the action goal. We therefore designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by either agent, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network when observing not only a human but also a robotic agent, for both self-related and social actions. The debriefing after the experiment showed that the actions of human-like artificial agents can be perceived as goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment. © 2017 Wiley Periodicals, Inc.

  16. Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation.

    PubMed

    O'Neill, John; Lu, Jason; Dockter, Rodney; Kowalewski, Timothy

    2018-03-23

    The design and validation of a continuously stretchable and flexible skin sensor for collaborative robotic applications is outlined. The skin consists of a PDMS layer doped with carbon nanotubes, augmented with conductive fabric and connected by only five wires to a simple microcontroller. Its accuracy is characterized in position as well as force, and the skin is also tested under uniaxial stretch. Two practical implementations in collaborative robotic applications are also demonstrated. The stationary position estimate has an RMSE of 7.02 mm, and the sensor error stays within 2.5 ± 1.5 mm even under stretch. The skin consistently provides an emergency-stop command at only 0.5 N of force and is shown to maintain a collaboration force of 10 N in a collaborative control experiment.

  17. A Robot Hand Testbed Designed for Enhancing Embodiment and Functional Neurorehabilitation of Body Schema in Subjects with Upper Limb Impairment or Loss

    PubMed Central

    Hellman, Randall B.; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I.; Santos, Veronica J.

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation “phantom limb pain” and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech “rubber hand” illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the “BairClaw” presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger–object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden. PMID:25745391

  18. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    PubMed

    Hellman, Randall B; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I; Santos, Veronica J

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.

  19. A New Technique for Compensating Joint Limits in a Robot Manipulator

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Hickman, Andre; Guo, Ten-Huei

    1996-01-01

    A new robust, optimal, adaptive technique for compensating rate and position limits in the joints of a six-degree-of-freedom elbow manipulator is presented. In this new algorithm, the demand left unmet as a result of actuator saturation is redistributed among the remaining unsaturated joints. The scheme is used to compensate for inadequate path planning and for problems such as joint limiting, joint freezing, and obstacle avoidance, in which a desired position and orientation are not attainable due to an unrealizable joint command. Once a joint encounters a limit, supplemental commands are sent to the other joints to best track the desired trajectory according to a selected criterion.
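
    The central idea, handing a saturated joint's unmet demand to the unsaturated joints, can be caricatured in a few lines; the limits and weights are invented, and the actual technique is optimal and adaptive rather than this simple proportional reallocation.

        def redistribute(cmd, limits, weights):
            """Clamp each joint-rate command to its limit and share the unmet
            demand among the unsaturated joints in proportion to their weights.
            A real implementation would re-check limits and pick weights from a
            tracking criterion; this sketch does one proportional pass."""
            out = [max(-l, min(l, c)) for c, l in zip(cmd, limits)]
            deficit = sum(c - o for c, o in zip(cmd, out))
            free = [i for i, (o, l) in enumerate(zip(out, limits)) if abs(o) < l]
            total_w = sum(weights[i] for i in free) or 1.0
            for i in free:
                out[i] += deficit * weights[i] / total_w
            return out

        # Joint 0 saturates at 1.0; its excess demand is shared by joints 1 and 2.
        print(redistribute([1.8, 0.2, 0.1], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))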

  20. A model-based executive for commanding robot teams

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    The paper presents a way to robustly command a system of systems as a single entity. Instead of modeling each component system in isolation and then manually crafting interaction protocols, this approach starts with a model of the collective population as a single system. By compiling the model into separate elements for each component system and utilizing a teamwork model for coordination, it circumvents the complexities of manually crafting robust interaction protocols. The resulting systems are both globally responsive by virtue of a team oriented interaction model and locally responsive by virtue of a distributed approach to model-based fault detection, isolation, and recovery.

  1. When a robot is social: spatial arrangements and multimodal semiotic engagement in the practice of social robotics.

    PubMed

    Alac, Morana; Movellan, Javier; Tanaka, Fumihide

    2011-12-01

    Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.

  2. Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?

    PubMed Central

    Martini, Molly C.; Gonzalez, Christian A.; Wiese, Eva

    2016-01-01

    Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution, and whether different aspects of having a mind (e.g., feeling pain, being able to move) require different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood of mind being attributed to non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described by a two-segment linear model, with no change in mind attribution along the spectrum from mechanistic to humanoid robot but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents, given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached; that is, agents need to be classified as having a mind before the addition of more human-like features significantly increases the degree to which mind is attributed to them. PMID:26745500

  3. Multi-agent robotic systems and applications for satellite missions

    NASA Astrophysics Data System (ADS)

    Nunes, Miguel A.

    A revolution in the space sector is happening. It is expected that in the next decade more satellites will be launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets, such as the autonomy and management of large groups of satellites, particularly small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy, in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as a multi-agent robotic system. Agents are defined as software programs that are social, reactive, and proactive, autonomously maximizing the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach to different problems of autonomy, such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellation designs for Earth observation and fast revisit times over large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that for this particular case the newly proposed system achieves control performance equivalent to the monolithic implementation. In terms of computational efficiency, the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03, compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1% reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation, and control algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions are implemented using existing algorithms, but one important contribution of this section is the introduction of a new six-degrees-of-freedom guidance method as part of the guidance, navigation, and control architecture. This new method is an explicit solution to the guidance problem and is particularly useful for real-time guidance in attitude and position, as opposed to typical guidance methods, which are based on numerical solutions and are therefore computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, approximately 30% of a full orbital period in low Earth orbit. The final section investigates the problem of optimizing satellite constellations for fast revisit times and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented.
The first is the optimization of a constellation for rapid coverage of the world's oceans in 24 hours or less. Results show that for an 80 km sensor swath width, 50 satellites are required to cover the oceans with a 24-hour revisit time. The second constellation configuration study focuses on optimization for rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less. The results show that for a fixed swath width of 160 km and a 3-hour revisit time, 52 satellites are required.
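
    A sketch of the kind of genetic-algorithm loop such constellation studies run is shown below; the parameter encoding and the stand-in fitness are invented for illustration, since a real evaluation would propagate orbits and compute revisit times over the target region.

        import random

        def evolve(fitness, pop_size=20, generations=50, seed=1):
            """Toy GA over (num_planes, sats_per_plane, inclination_deg) tuples;
            lower fitness is better."""
            rng = random.Random(seed)
            rand = lambda: (rng.randint(1, 12), rng.randint(1, 12), rng.uniform(40, 98))
            pop = [rand() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                parents = pop[: pop_size // 2]               # elitist selection
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = rng.sample(parents, 2)
                    child = tuple(rng.choice(pair) for pair in zip(a, b))  # crossover
                    if rng.random() < 0.2:                   # mutation
                        child = rand()
                    children.append(child)
                pop = parents + children
            return min(pop, key=fitness)

        # Stand-in fitness: penalize satellite count plus a fake coverage-gap term.
        fit = lambda c: c[0] * c[1] + abs(c[2] - 86) * 0.5
        print(evolve(fit))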

  4. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment.

    PubMed

    Yao, Yao; Storme, Veronique; Marchal, Kathleen; Van de Peer, Yves

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.

  5. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    PubMed Central

    Yao, Yao; Storme, Veronique; Marchal, Kathleen

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population. PMID:28028477

  6. Planning and Control for Microassembly of Structures Composed of Stress-Engineered MEMS Microrobots

    PubMed Central

    Donald, Bruce R.; Levey, Christopher G.; Paprotny, Igor; Rus, Daniela

    2013-01-01

    We present control strategies that implement planar microassembly using groups of stress-engineered MEMS microrobots (MicroStressBots) controlled through a single global control signal. The global control signal couples the motion of the devices, causing the system to be highly underactuated. In order for the robots to assemble into arbitrary planar shapes despite the high degree of underactuation, it is desirable that each robot be independently maneuverable (independently controllable). To achieve independent control, we fabricated robots that behave (move) differently from one another in response to the same global control signal. We harnessed this differentiation to develop assembly control strategies, where the assembly goal is a desired geometric shape that can be obtained by connecting the chassis of individual robots. We derived and experimentally tested assembly plans that command some of the robots to make progress toward the goal, while other robots are constrained to remain in small circular trajectories (closed-loop orbits) until it is their turn to move into the goal shape. Our control strategies were tested on systems of fabricated MicroStressBots. The robots are 240–280 μm × 60 μm × 7–20 μm in size and move simultaneously within a single operating environment. We demonstrated the feasibility of our control scheme by accurately assembling five different types of planar microstructures. PMID:23580796

  7. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks.

    PubMed

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction.
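
    For readers unfamiliar with echo state networks, the sketch below shows the two ingredients that make them practical: a fixed random recurrent reservoir and a linear readout trained by simple regression. The sizes, spectral radius, and toy memory target are illustrative assumptions; the actual model maps grammatical forms to predicate-argument meaning representations.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res = 3, 100
        W_in = rng.normal(scale=0.5, size=(n_res, n_in))
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 ("echo" property)

        def run_reservoir(inputs):
            """Drive the fixed reservoir with an input sequence; only the linear
            readout is ever trained, the recurrent weights stay random."""
            x, states = np.zeros(n_res), []
            for u in inputs:
                x = np.tanh(W_in @ u + W @ x)
                states.append(x.copy())
            return np.array(states)

        U = rng.normal(size=(200, n_in))            # toy input sequence
        Y = np.roll(U[:, 0], 1)                     # toy target: recall previous input
        S = run_reservoir(U)
        W_out = np.linalg.lstsq(S, Y, rcond=None)[0]  # least-squares readout
        print(np.corrcoef(S @ W_out, Y)[0, 1])      # readout tracks the memory target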

  8. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, large soft-tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed over the last decade to assist surgical diagnosis and therapy in MRI-guided procedures, a unified method for developing high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness, and modularity, that work inside a closed-bore MRI scanner remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI-compatibility and shows that the robotic system has better than 0.01 mm positioning accuracy.

  9. Exploring the acquisition and production of grammatical constructions through human-robot interaction with echo state networks

    PubMed Central

    Hinaut, Xavier; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2014-01-01

    One of the principal functions of human language is to allow people to coordinate joint action. This includes the description of events, requests for action, and their organization in time. A crucial component of language acquisition is learning the grammatical structures that allow the expression of such complex meaning related to physical events. The current research investigates the learning of grammatical constructions and their temporal organization in the context of human-robot physical interaction with the embodied sensorimotor humanoid platform, the iCub. We demonstrate three noteworthy phenomena. First, a recurrent network model is used in conjunction with this robotic platform to learn the mappings between grammatical forms and predicate-argument representations of meanings related to events, and the robot's execution of these events in time. Second, this learning mechanism functions in the inverse sense, i.e., in a language production mode, where rather than executing commanded actions, the robot will describe the results of human generated actions. Finally, we collect data from naïve subjects who interact with the robot via spoken language, and demonstrate significant learning and generalization results. This allows us to conclude that such a neural language learning system not only helps to characterize and understand some aspects of human language acquisition, but also that it can be useful in adaptive human-robot interaction. PMID:24834050

  10. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. The implementation is the second-year entry of Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. In order to control the wheelchair while retaining its robust motor controls, the wheelchair joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
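    A minimal sketch of the kind of color segmentation such a course demands is shown below; the RGB thresholds are hypothetical and would need tuning to the actual camera and lighting.

    import numpy as np

    def segment_course(rgb: np.ndarray):
        """rgb: H x W x 3 uint8 image -> boolean masks for lines and obstacles."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        white_lines = (r > 200) & (g > 200) & (b > 200)            # bright, neutral
        orange_obstacles = (r > 180) & (g > 60) & (g < 160) & (b < 80)
        return white_lines, orange_obstacles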

  11. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node, 6-strut pyramid structure which is being used by the NASA Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; the tetrahedron "moves" through a sequence of activities: extending a strut, shifting the center of gravity, and falling. Currently, strut extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendible strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed to support more complex dynamic motions. This is the basis of the new approach to robotic motion under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.

  12. The Role of Reciprocity in Verbally Persuasive Robots.

    PubMed

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2016-08-01

    The current research examines the persuasive effects of reciprocity in the context of human-robot interaction. This is an important theoretical and practical extension of persuasive robotics, testing (1) whether robots can utilize verbal requests and (2) whether robots can utilize persuasive mechanisms (e.g., reciprocity) to gain human compliance. Participants played a trivia game with a robot teammate. The ostensibly autonomous robot helped (or failed to help) the participants by providing correct (vs. incorrect) trivia answers. The robot then directly asked participants to complete a 15-minute pattern-recognition task. Compared to no help, results showed that a robot's prior helping behavior significantly increased the likelihood of compliance (60 percent vs. 33 percent). Interestingly, participants' evaluations of the robot (i.e., competence, warmth, and trustworthiness) did not predict compliance. The results also provided an insightful comparison showing that participants complied at similar rates with the robot and with computer agents. These findings document a clear and empirically powerful role for verbal messages in persuasive robotics.

  13. United States Army Biomedical Research and Development Laboratory Annual Progress Report FY90

    DTIC Science & Technology

    1991-01-01

    pesticide. Parallel and follow-on studies will include hydrolysis products of nerve agents, vesicants, and agents of...Division; FO: Fog oil; FORSCOM: U.S. Army Forces Command; FY: Fiscal year; GA: the nerve agent tabun; GB: the nerve agent sarin; GD: the nerve agent soman; GLP... Nerve Agents, Industrial Hygiene Sampling, Microbiology, Combustion Products, Liquid Gun Propellant, Organic Chemistry, Inorganic

  14. International Space Station (ISS)

    NASA Image and Video Library

    2002-06-01

    Huddled together in the Destiny laboratory of the International Space Station (ISS) are the Expedition Four crew (dark blue shirts), Expedition Five crew (medium blue shirts), and the STS-111 crew (green shirts). The Expedition Four crewmembers are, from front to back, Cosmonaut Yury I. Onufrienko, mission commander; and Astronauts Daniel W. Bursch and Carl E. Walz, flight engineers. The STS-111 crewmembers are, from front to back, Astronauts Kenneth D. Cockrell, mission commander; Franklin R. Chang-Diaz, mission specialist; Paul S. Lockhart, pilot; and Philippe Perrin, mission specialist. Expedition Five crewmembers are, from front to back, Cosmonaut Valery G. Korzun, mission commander; Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. The ISS received a new crew, Expedition Five, replacing Expedition Four after a record-setting 196 days in space, when the Space Shuttle Orbiter Endeavour STS-111 mission visited in June 2002. Three spacewalks enabled the STS-111 crew to accomplish additional mission objectives: the delivery and installation of the Mobile Base System (MBS), which is an important part of the station's Mobile Servicing System allowing the robotic arm to travel the length of the station; the replacement of a wrist roll joint on the station's robotic arm; and unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. The STS-111 mission, the 14th Shuttle mission to visit the ISS, was launched on June 5, 2002 and landed June 19, 2002.

  15. Expedition Crews Four and Five and STS-111 Crew Aboard the ISS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Huddled together in the Destiny laboratory of the International Space Station (ISS) are the Expedition Four crew (dark blue shirts), Expedition Five crew (medium blue shirts), and the STS-111 crew (green shirts). The Expedition Four crewmembers are, from front to back, Cosmonaut Yury I. Onufrienko, mission commander; and Astronauts Daniel W. Bursch and Carl E. Walz, flight engineers. The STS-111 crewmembers are, from front to back, Astronauts Kenneth D. Cockrell, mission commander; Franklin R. Chang-Diaz, mission specialist; Paul S. Lockhart, pilot; and Philippe Perrin, mission specialist. Expedition Five crewmembers are, from front to back, Cosmonaut Valery G. Korzun, mission commander; Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. The ISS received a new crew, Expedition Five, replacing Expedition Four after a record-setting 196 days in space, when the Space Shuttle Orbiter Endeavour STS-111 mission visited in June 2002. Three spacewalks enabled the STS-111 crew to accomplish additional mission objectives: the delivery and installation of the Mobile Base System (MBS), which is an important part of the station's Mobile Servicing System allowing the robotic arm to travel the length of the station; the replacement of a wrist roll joint on the station's robotic arm; and unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. The STS-111 mission, the 14th Shuttle mission to visit the ISS, was launched on June 5, 2002 and landed June 19, 2002.

  16. Frick, Melvin and Love in the U.S. Lab

    NASA Image and Video Library

    2008-02-13

    S122-E-008251 (13 Feb. 2008) --- Astronauts Steve Frick (top left), STS-122 commander; Leland Melvin (bottom) and Stanley Love, both mission specialists, take a moment for a photo while working the controls of the station's robotic Canadarm2 in the Destiny laboratory of the International Space Station while Space Shuttle Atlantis is docked with the station.

  17. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.
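    As a toy illustration of direct model reference adaptive control, the following sketch applies the classic MIT-rule adaptive law to a first-order plant; the thesis concerns a 6-DOF PUMA 560 manipulator, so the plant, gains, and reference signal here are illustrative assumptions only.

    # Plant: dy/dt = -a*y + b*u with b treated as unknown; reference model:
    # dym/dt = -am*ym + bm*r; controller u = theta*r with theta adapted online.
    a, b = 1.0, 2.0
    am, bm = 1.0, 1.0
    gamma, dt = 0.5, 0.001          # adaptation gain, Euler integration step

    y = ym = theta = 0.0
    for k in range(int(20.0 / dt)):
        r = 1.0 if (k * dt) % 10 < 5 else -1.0   # square-wave reference
        u = theta * r                            # adjustable feedforward gain
        e = y - ym                               # tracking error
        theta -= gamma * e * ym * dt             # MIT rule: dtheta/dt = -g*e*ym
        y += (-a * y + b * u) * dt               # integrate plant
        ym += (-am * ym + bm * r) * dt           # integrate reference model
    print(f"adapted gain theta ~ {theta:.3f} (ideal bm/b = {bm/b:.3f})")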

  18. Expedition 34 Crewmembers in the Cupola Module

    NASA Image and Video Library

    2012-11-27

    ISS034-E-010953 (27 Nov. 2012) --- NASA astronaut Kevin Ford (lower right), Expedition 34 commander; along with Russian cosmonauts Evgeny Tarelkin (left) and Oleg Novitskiy, both flight engineers, pose for a photo in the Cupola of the International Space Station. The Canadarm2 robotic arm's Latching End Effector (LEE) is visible through a window in the background.

  19. Stanford Aerospace Research Laboratory research overview

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.

    1993-01-01

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.

  20. Review of Command and Control Models and Theory

    DTIC Science & Technology

    1990-09-01

    or psychological pressures exerted with the intent to assure that an agent or group will respond as directed (p. 88). Thus, while "command and...Organizational competence criteria are defined by using a modification to the criteria outlined by Bennis. Processes are then grouped in terms of which...1982) attempted to identify the skills and behaviors, used in battalion command and control groups, that contribute to effective performance. In

  1. STS-111 Onboard Photo of Endeavour Docking With PMA-2

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The STS-111 mission, the 14th Shuttle mission to visit the International Space Station (ISS), was launched on June 5, 2002 aboard the Space Shuttle Orbiter Endeavour. On board were the STS-111 and Expedition Five crew members. Astronauts Kenneth D. Cockrell, commander; Paul S. Lockhart, pilot; and mission specialists Franklin R. Chang-Diaz and Philippe Perrin were the STS-111 crew members. Expedition Five crew members included Cosmonaut Valery G. Korzun, commander; Astronaut Peggy A. Whitson and Cosmonaut Sergei Y. Treschev, flight engineers. Three space walks enabled the STS-111 crew to accomplish the delivery and installation of the Mobile Remote Servicer Base System (MBS), an important part of the Station's Mobile Servicing System that allows the robotic arm to travel the length of the Station, which is necessary for future construction tasks; the replacement of a wrist roll joint on the Station's robotic arm; and the task of unloading supplies and science experiments from the Leonardo Multi-Purpose Logistics Module, which made its third trip to the orbital outpost. In this photograph, the Space Shuttle Endeavour, backdropped by the blackness of space, is docked to the Pressurized Mating Adapter (PMA-2) at the forward end of the Destiny Laboratory on the ISS. A portion of the Canadarm2 is visible on the right, and Endeavour's robotic arm is in full view as it is stretched out with the S0 (S-zero) Truss at its end.

  2. Robots with a gentle touch: advances in assistive robotics and prosthetics.

    PubMed

    Harwin, W S

    1999-01-01

    As healthcare costs rise and an aging population makes increased demands on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so that they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has strong implications for health care, in particular for rehabilitation, where a robot must work in conjunction with an individual, and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state of the art of rehabilitation robots, with examples from prosthetics, aids for daily living, and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. The paper concludes with a discussion of this problem and overviews some human models that can be used to facilitate the design of human/machine interfaces.

  3. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
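    The sensory-substitution mapping described, from joint configuration to vibration frequency, can be sketched as a simple linear map per actuator; the joint ranges and frequency band below are assumptions, not measured device parameters.

    def joint_to_vibration_hz(angle_rad: float,
                              lo: float, hi: float,
                              f_min: float = 40.0, f_max: float = 250.0) -> float:
        """Linearly map a joint angle in [lo, hi] to a vibration frequency."""
        t = (angle_rad - lo) / (hi - lo)
        t = min(max(t, 0.0), 1.0)            # clamp to the joint's range
        return f_min + t * (f_max - f_min)

    # e.g. an elbow joint at mid-range drives its forearm actuator at ~145 Hz:
    # joint_to_vibration_hz(0.8, lo=0.0, hi=1.6)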

  4. Thermal tracking in mobile robots for leak inspection activities.

    PubMed

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular monitoring of the whole plant. This paper presents an autonomous robotic system that performs pipeline inspection for early detection and prevention of leakages in thermal power plants, based on work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle-filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, with the tracking algorithm improving the performance of the leakage detection system.
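    A bootstrap particle filter for keeping a detected hot spot centered in the image might look like the following sketch; the state (pixel position), noise levels, and resampling scheme are illustrative simplifications of the paper's tracker.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    particles = rng.uniform(0, [640, 480], (N, 2))   # init over the image
    weights = np.full(N, 1.0 / N)

    def pf_step(measurement, motion_std=8.0, meas_std=15.0):
        """One predict/update/resample cycle; measurement is a detected (u, v)."""
        global particles, weights
        particles += rng.normal(0, motion_std, particles.shape)   # predict
        d2 = ((particles - measurement) ** 2).sum(axis=1)         # update
        weights = np.exp(-0.5 * d2 / meas_std**2) + 1e-12
        weights /= weights.sum()
        idx = rng.choice(N, N, p=weights)                         # resample
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
        return particles.mean(axis=0)    # estimate fed to the arm commands

    # est = pf_step(np.array([320.0, 200.0]))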

  5. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    PubMed Central

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-01-01

    Maintenance tasks are crucial for all kinds of industries, especially extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it allows constant and regular monitoring of the whole plant. This paper presents an autonomous robotic system that performs pipeline inspection for early detection and prevention of leakages in thermal power plants, based on work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle-filter-based tracking algorithm to keep the target in the field of view of the camera and to compensate for the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, with the tracking algorithm improving the performance of the leakage detection system. PMID:24113684

  6. Results from Testing Crew-Controlled Surface Telerobotics on the International Space Station

    NASA Technical Reports Server (NTRS)

    Bualat, Maria; Schreckenghost, Debra; Pacis, Estrellina; Fong, Terrence; Kalar, Donald; Beutter, Brent

    2014-01-01

    During Summer 2013, the Intelligent Robotics Group at NASA Ames Research Center conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed lunar mission, in which an astronaut in lunar orbit would remotely operate a planetary rover to deploy a radio telescope on the lunar far side. Over the course of Expedition 36, three ISS astronauts remotely operated the NASA "K10" planetary rover in an analogue lunar terrain located at the NASA Ames Research Center in California. The astronauts used a "Space Station Computer" (crew laptop), a combination of supervisory control (command sequencing) and manual control (discrete commanding), and Ku-band data communications to command and monitor K10 for 11 hours. In this paper, we present and analyze test results, summarize user feedback, and describe directions for future research.
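    The two commanding modes described, supervisory command sequencing and discrete manual commanding, can be pictured with a small sketch; the command strings and rover interface below are hypothetical.

    from collections import deque

    class RoverCommander:
        def __init__(self, send):          # send: callable that uplinks one command
            self.queue = deque()
            self.send = send

        def load_sequence(self, cmds):     # supervisory: queue a whole plan
            self.queue.extend(cmds)

        def step(self):                    # uplink the next queued command, if any
            if self.queue:
                self.send(self.queue.popleft())

        def manual(self, cmd):             # manual: a discrete command sent at once
            self.send(cmd)

    # rc = RoverCommander(send=print)
    # rc.load_sequence(["drive 2.0", "pan_cam 30", "deploy_antenna"])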

  7. The Fourth Law of Robotics.

    ERIC Educational Resources Information Center

    Markoff, John

    1994-01-01

    Discusses intelligent software agents, or knowledge robots (knowbots), and the impact they have on the Internet. Topics addressed include ethical dilemmas; problems created by rapid growth on the Internet; new technologies that are amplifying growth; and a shift to a market economy and resulting costs. (LRW)

  8. Proceedings 3rd NASA/IEEE Workshop on Formal Approaches to Agent-Based Systems (FAABS-III)

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael (Editor); Rash, James (Editor); Truszkowski, Walt (Editor); Rouff, Christopher (Editor)

    2004-01-01

    These proceedings contain 18 papers and 4 poster presentations, covering topics such as multi-agent systems, agent-based control, formalisms, norms, and physical and biological models of agent-based systems. Applications presented in the proceedings include systems analysis, software engineering, computer networks, and robot control.

  9. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.

    PubMed

    Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi

    2018-03-26

    For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instruction from people. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent obtains intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot that received direct rewards for task achievement. Copyright © 2018 Elsevier Ltd. All rights reserved.
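    The intrinsic reward idea, a reward proportional to the error of an action-conditional predictive model, can be sketched as follows; the linear predictor and learning rate are hypothetical stand-ins for the paper's learned model.

    import numpy as np

    class IntrinsicReward:
        def __init__(self, state_dim: int, action_dim: int, lr: float = 1e-2):
            self.W = np.zeros((state_dim, state_dim + action_dim))
            self.lr = lr

        def reward(self, s, a, s_next) -> float:
            x = np.concatenate([s, a])
            err = s_next - self.W @ x              # prediction error
            self.W += self.lr * np.outer(err, x)   # online predictor update
            return float(err @ err)                # reward = squared error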

  10. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    PubMed Central

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform for example rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented to an anthropomorphic robot. For evaluation of the concept an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
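    The phase-coupling idea, reducing each agent's task cycle to a phase on a limit cycle and pulling the robot's phase toward its partner's, can be sketched with a Kuramoto-style update; the natural frequency and coupling gain below are illustrative assumptions.

    import numpy as np

    def phase_step(phi_robot, phi_partner, omega=2 * np.pi / 3.0, K=1.5, dt=0.01):
        """Advance the robot's phase one step while coupling to the partner."""
        dphi = omega + K * np.sin(phi_partner - phi_robot)
        return (phi_robot + dphi * dt) % (2 * np.pi)

    # The phase then indexes the closed movement trajectory, e.g. a
    # pick-and-place cycle parameterized as trajectory(phi) in task space.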

  11. Babybot: a biologically inspired developing robotic agent

    NASA Astrophysics Data System (ADS)

    Metta, Giorgio; Panerai, Francesco M.; Sandini, Giulio

    2000-10-01

    The study of development, either artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We argue that developmental studies might provide a different and potentially interesting perspective both on how to build an artificial adaptive agent and on understanding how the brain solves sensory, motor, and cognitive tasks. It is our opinion that the acquisition of the proper behavior might indeed be facilitated because, within an ecological context, the agent, its adaptive structure, and the environment dynamically interact, thus constraining an otherwise difficult learning problem. In very general terms we describe the proposed approach and supporting biologically related facts. In order to further analyze these aspects from the modeling point of view, we demonstrate how a twelve degrees-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. In particular, the experimental setup consists of a five degrees-of-freedom (dof) robot head and an off-the-shelf six-dof robot manipulator, both mounted on a rotating base, i.e., the torso. From the sensory point of view, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels. It is worth mentioning, for example, the space-variant eyes, exploiting foveal and peripheral vision in a single arrangement, and the inertial sensor providing efficient image stabilization (the vestibulo-ocular reflex).

  12. The MITy micro-rover: Sensing, control, and operation

    NASA Technical Reports Server (NTRS)

    Malafeew, Eric; Kaliardos, William

    1994-01-01

    The sensory, control, and operation systems of the 'MITy' Mars micro-rover are discussed. It is shown that the customized sun tracker and laser rangefinder provide internal, autonomous dead reckoning and hazard detection in unstructured environments. The micro-rover consists of three articulated platforms with sensing, processing and payload subsystems connected by a dual spring suspension system. A reactive obstacle avoidance routine makes intelligent use of robot-centered laser information to maneuver through cluttered environments. The hazard sensors include a rangefinder, inclinometers, proximity sensors and collision sensors. A 486/66 laptop computer runs the graphical user interface and programming environment. A graphical window displays robot telemetry in real time and a small TV/VCR is used for real time supervisory control. Guidance, navigation, and control routines work in conjunction with the mapping and obstacle avoidance functions to provide heading and speed commands that maneuver the robot around obstacles and towards the target.

  13. PR-PR: Cross-Platform Laboratory Automation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, G; Stawski, N; Goyal, G

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  14. PR-PR: cross-platform laboratory automation system.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J

    2014-08-15

    To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.

  15. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
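    The velocity-alignment ("viscous friction-like") term the authors highlight can be sketched as follows; the gain, neighbourhood radius, and distance weighting are illustrative, and the real model additionally accounts for communication delay, sensor noise, and inertia.

    import numpy as np

    def alignment_accel(i, positions, velocities, C=1.0, radius=10.0):
        """Velocity-alignment acceleration for robot i (positions: N x 3)."""
        acc = np.zeros(3)
        for j in range(len(positions)):
            if j == i:
                continue
            r = np.linalg.norm(positions[j] - positions[i])
            if r < radius:
                # pull robot i's velocity toward neighbour j's, more
                # strongly for close neighbours
                acc += C * (velocities[j] - velocities[i]) / max(r, 1e-6) ** 2
        return acc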

  16. Acquisition of Autonomous Behaviors by Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Peters, R. A., II; Sarkar, N.; Bodenheimer, R. E.; Brown, E.; Campbell, C.; Hambuchen, K.; Johnson, C.; Koku, A. B.; Nilas, P.; Peng, J.

    2005-01-01

    Our research achievements under the NASA-JSC grant contributed significantly in the following areas. Multi-agent-based robot control architecture called the Intelligent Machine Architecture (IMA): the Vanderbilt team received a Space Act Award for this research from NASA JSC in October 2004. Cognitive control and the Self Agent: cognitive control in humans is the ability to consciously manipulate thoughts and behaviors using attention to deal with conflicting goals and demands. We have been updating the IMA Self Agent towards this goal. If the opportunity arises, we would like to work with NASA to empower Robonaut to perform cognitive control. Applications: (1) SES for Robonaut; (2) Robonaut Fault Diagnostic System; (3) ISAC Behavior Generation and Learning; and (4) Segway Research.

  17. Robot transparency, trust and utility

    NASA Astrophysics Data System (ADS)

    Wortham, Robert H.; Theodorou, Andreas

    2017-07-01

    As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.

  18. Suitability of Agent Technology for Military Command and Control in the Future Combat System Environment

    DTIC Science & Technology

    2003-06-01

    and Multi-Agent Systems 1, no. 1 (1998): 7-38. [23] K. Sycara, A. Pannu, M. Williamson, and D. Zeng, "Distributed Intelligent Agents," IEEE Expert 11...services that include support for mobility, security, management, persistence, and naming of agents. [i] K. Sycara, A. Pannu, M. Williamson, and D

  19. Mobile app for human-interaction with sitter robots

    NASA Astrophysics Data System (ADS)

    Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.

    2017-05-01

    Human environments are often unstructured and unpredictable, making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and the schedule of interventions. As a result, adding a robot to a semi-structured hospital environment is an easier problem to tackle, with results that could benefit the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human each perform the tasks they are best suited for. Human-robot communication and control are realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera, and other native interfaces, while providing failure-mode recovery options for users. Our app can access video feeds and sensor data from robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient-sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.

  20. How Albot0 finds its way home: a novel approach to cognitive mapping using robots.

    PubMed

    Yeap, Wai K

    2011-10-01

    Much of what we know about cognitive mapping comes from observing how biological agents behave in their physical environments, and several of these ideas were implemented on robots, imitating such a process. In this paper a novel approach to cognitive mapping is presented whereby robots are treated as a species of their own and their cognitive mapping is investigated. Such robots are referred to as Albots. The design of the first Albot, Albot0, is presented. Albot0 computes an imprecise map and employs a novel method to find its way home. Both the map and the return-home algorithm exhibited characteristics commonly found in biological agents. What we have learned from Albot0's cognitive mapping is discussed. One major lesson is that the spatiality in a cognitive map affords us rich and useful information, and this argues against recent suggestions that the notion of a cognitive map is not a useful one. Copyright © 2011 Cognitive Science Society, Inc.

  1. Discrete event command and control for networked teams with multiple missions

    NASA Astrophysics Data System (ADS)

    Lewis, Frank L.; Hudas, Greg R.; Pang, Chee Khiang; Middleton, Matthew B.; McMurrough, Christopher

    2009-05-01

    During mission execution in military applications, the TRADOC Pamphlet 525-66 Battle Command and Battle Space Awareness capabilities prescribe expectations that networked teams will perform in a reliable manner under changing mission requirements, varying resource availability and reliability, and resource faults. In this paper, a Command and Control (C2) structure is presented that allows for computer-aided execution of the networked team decision-making process, control of force resources, shared resource dispatching, and adaptability to change based on battlefield conditions. A mathematically justified networked computing environment is provided called the Discrete Event Control (DEC) Framework. DEC has the ability to provide the logical connectivity among all team participants including mission planners, field commanders, war-fighters, and robotic platforms. The proposed data management tools are developed and demonstrated on a simulation study and an implementation on a distributed wireless sensor network. The results show that the tasks of multiple missions are correctly sequenced in real-time, and that shared resources are suitably assigned to competing tasks under dynamically changing conditions without conflicts and bottlenecks.
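    The conflict-free dispatching behaviour such a discrete-event framework formalizes can be pictured with a small sketch: a task fires only when its precondition events have occurred and the shared resources it needs are free. Task and resource names are hypothetical, and the real DEC framework is considerably richer.

    def dispatch(tasks, occurred, free):
        """tasks: list of (name, needed_events, needed_resources) with the
        last two given as sets. Fires every enabled task once; returns names."""
        fired = []
        for name, events, resources in tasks:
            if events <= occurred and resources <= free:
                free -= resources              # seize shared resources
                fired.append(name)
                occurred.add(name + "_started")
        return fired

    # fired = dispatch([("recon", {"mission_go"}, {"uav1"})],
    #                  occurred={"mission_go"}, free={"uav1", "ugv2"})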

  2. The multi-criteria optimization for the formation of the multiple-valued logic model of a robotic agent

    NASA Astrophysics Data System (ADS)

    Bykovsky, A. Yu; Sherbakov, A. A.

    2016-08-01

    The C-valued Allen-Givone algebra is an attractive tool for modeling a robotic agent, but it requires the consensus method of minimization to simplify logic expressions. This procedure replaces some undefined states of the function with the maximal truth value, thus extending the initially given truth table. This in turn creates the problem of different formal representations for the same initially given function. Multi-criteria optimization is proposed for the deliberate choice of undefined states and model formation.

  3. Deictic primitives for general purpose navigation

    NASA Technical Reports Server (NTRS)

    Crismann, Jill D.

    1994-01-01

    A visually based deictic primitive used as an elementary command set for general-purpose navigation was investigated. It was shown that a simple "follow your eyes" scenario is sufficient for tracking a moving target. Limitations on velocity and acceleration, and modeling of the response of the mechanical systems, were enforced. Realistic paths of the robots were produced during the simulation. Scientists could remotely command a planetary rover to go to a particular rock formation that may be interesting. Similarly, an expert at plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile robot. Since the same visual cues are used in the deictic primitives, we could imagine that the exact same control software could be used for all of these applications.

  4. The walking robot project

    NASA Technical Reports Server (NTRS)

    Williams, P.; Sagraniching, E.; Bennett, M.; Singh, R.

    1991-01-01

    A walking robot was designed, analyzed, and tested as an intelligent, mobile, and terrain-adaptive system. The robot's design was an application of existing technologies. The design of the six legs modified and combined well-understood mechanisms and was optimized for performance, flexibility, and simplicity. The body design incorporated two tripods for walking stability and ease of turning. The electrical hardware design used modularity and distributed processing to drive the motors. The software design used feedback to coordinate the system and simple keystrokes to give commands. The walking machine can be easily adapted to hostile environments such as high-radiation zones and alien terrain. The primary goal of the leg design was to create a leg capable of supporting the robot's body and electrical hardware while walking or performing desired tasks, namely those required for planetary exploration. The leg designers' intent was to study the maximum amount of flexibility and maneuverability achievable by the simplest and lightest leg design. The main constraints for the leg design were leg kinematics, ease of assembly, degrees of freedom, number of motors, overall size, and weight.

  5. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    PubMed

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent, and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment NetLogo (educational software that simplifies the study of and experimentation with complex systems). The engine proposed and implemented in NetLogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning, and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
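    A leaky integrate-and-fire neuron of the sort such an engine simplifies can be sketched in a few lines; the membrane parameters below are illustrative rather than SpikingLab's actual defaults.

    class LIFNeuron:
        def __init__(self, tau=20.0, v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0):
            self.tau, self.v_rest = tau, v_rest
            self.v_thresh, self.v_reset = v_thresh, v_reset
            self.v = v_rest

        def step(self, input_current: float, dt: float = 1.0) -> bool:
            """Integrate the membrane one time step; return True on a spike."""
            self.v += dt * (-(self.v - self.v_rest) + input_current) / self.tau
            if self.v >= self.v_thresh:
                self.v = self.v_reset          # fire and reset
                return True
            return False

    # n = LIFNeuron(); spikes = [n.step(20.0) for _ in range(100)]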

  6. Final space shuttle crew training session in the NBL

    NASA Image and Video Library

    2011-06-13

    Photograph of the final space shuttle crew training session in the NBL, with STS-135 Mission Specialists Sandy Magnus and Rex Walheim in the water. STS-135 Commander Chris Ferguson serves as intravehicular suit-up lead, and Pilot Doug Hurley serves as robotic arm operator. Photo Date: June 13, 2011. Location: NBL - Pool Topside. Photographer: Robert Markowitz

  7. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012573 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  8. Parmitano with Robonaut 2

    NASA Image and Video Library

    2013-06-27

    ISS036-E-012571 (27 June 2013) --- European Space Agency astronaut Luca Parmitano, Expedition 36 flight engineer, works with Robonaut 2, the first humanoid robot in space, during a round of ground-commanded tests in the Destiny laboratory of the International Space Station. R2 was assembled earlier this week for several days of data takes by the payload controllers at the Marshall Space Flight Center.

  9. Expedition 34 Crewmembers in the Cupola Module

    NASA Image and Video Library

    2012-11-27

    ISS034-E-010955 (27 Nov. 2012) --- NASA astronaut Kevin Ford (lower right), Expedition 34 commander; along with Russian cosmonauts Evgeny Tarelkin (left) and Oleg Novitskiy, both flight engineers, are partially silhouetted as they pose for a photo in the Cupola of the International Space Station. The Canadarm2 robotic arm's Latching End Effector (LEE) is visible through a window in the background.

  10. Commanding Heterogeneous Multi-Robot Teams

    DTIC Science & Technology

    2014-06-01

    Coalition Battle Management Language (C-BML) Study Group Report. 2005 Fall Simulation Interoperability Workshop (05F-SIW-041), Orlando, FL, September...NMSG-085 CIG Land Operation Demonstration. 2013 Spring Simulation Interoperability Workshop (13S-SIW-031), San Diego, CA, April 2013. [4] K...Simulation Interoperability Workshop (10F-SIW-039), Orlando, FL, September 2010. [5] M. Langerwisch, M. Ax, S. Thamke, T. Remmersmann, A. Tiderko

  11. Sensors and Algorithms for an Unmanned Surf-Zone Robot

    DTIC Science & Technology

    2015-12-01

    3. Data Fusion and Filtering ... C. Virtual Potential Field (VPF) Path Planning ...iron effects are clearly seen: soft-iron de-calibration (sphere distortion) was caused by proximity of circuit boards. Offset of the center of the...information to perform global tasks such as path planning, sensor and actuator commands, external communications, etc. Python3 is used as the primary

  12. An assisted navigation training framework based on judgment theory using sparse and discrete human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano

    2009-01-01

    This paper presents a new framework to train people with severe motor disabilities to steer an assisted mobile robot (AMR), such as a powered wheelchair. Users with high levels of motor disability are not able to use standard HMIs that provide a continuous command signal (e.g., a standard joystick). For this reason, HMIs providing a small set of simple commands, sparse and discrete in time, must be used (e.g., a scanning interface or a brain-computer interface), which makes steering the AMR very difficult. The assisted navigation training framework (ANTF) is therefore designed to train users to drive the AMR in indoor structured environments using this type of HMI. Additionally, it characterizes the user's competence in steering the robot, which is later used to adapt the AMR navigation system to that competence. A rule-based lens (RBL) model is used to characterize users driving the AMR. Individual judgment performance in choosing the best manoeuvres is modeled using a genetic-based policy capturing (GBPC) technique suited to inferring non-compensatory judgment strategies from human decision data. Three user models, at three different learning stages, using the RBL paradigm, are presented.

  13. Decoding static and dynamic arm and hand gestures from the JPL BioSleeve

    NASA Astrophysics Data System (ADS)

    Wolf, M. T.; Assad, C.; Stoica, A.; You, Kisung; Jethani, H.; Vernacchia, M. T.; Fromm, J.; Iwashita, Y.

    This paper presents methods for inferring arm and hand gestures from forearm surface electromyography (EMG) sensors and an inertial measurement unit (IMU). These sensors, together with their electronics, are packaged in an easily donned device, termed the BioSleeve, worn on the forearm. The gestures decoded from BioSleeve signals can provide natural user interface commands to computers and robots, without encumbering the user's hands and without the problems that hinder camera-based systems. Potential aerospace applications for this technology include gesture-based crew-autonomy interfaces, high degree-of-freedom robot teleoperation, and astronauts' control of power-assisted gloves during extra-vehicular activity (EVA). We have developed techniques to interpret both static (stationary) and dynamic (time-varying) gestures from the BioSleeve signals, enabling a diverse and adaptable command library. For static gestures, we achieved over 96% accuracy on 17 gestures and nearly 100% accuracy on 11 gestures, based solely on EMG signals. Nine dynamic gestures were decoded with an accuracy of 99%. This combination of wearable EMG and IMU hardware and accurate algorithms for decoding both static and dynamic gestures thus shows promise for natural user interface applications.
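    Static-gesture decoding from multichannel EMG can be pictured with a deliberately simple sketch: window the signal, take per-channel RMS features, and classify by nearest class centroid. This is an illustrative simplification, not the BioSleeve's actual decoder.

    import numpy as np

    def rms_features(window: np.ndarray) -> np.ndarray:
        """window: samples x channels -> one RMS value per channel."""
        return np.sqrt((window ** 2).mean(axis=0))

    def classify(window: np.ndarray, centroids: dict) -> str:
        """centroids: gesture name -> stored mean RMS feature vector."""
        f = rms_features(window)
        return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

    # centroids = {"fist": ..., "point": ...}  # learned from labeled windows
    # gesture = classify(emg_window, centroids)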

  14. 32 CFR Appendix B to Part 192 - Procedures and Reports

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the complainant's action for future reference and inform the commander of the results of the HRS... actions and results of the inquiry or investigation, and if discriminatory practices were found, written... the case file. (3) Inform the agent of the results of the inquiry by command correspondence if an...

  15. 32 CFR Appendix B to Part 192 - Procedures and Reports

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the complainant's action for future reference and inform the commander of the results of the HRS... actions and results of the inquiry or investigation, and if discriminatory practices were found, written... the case file. (3) Inform the agent of the results of the inquiry by command correspondence if an...

  16. 32 CFR Appendix B to Part 192 - Procedures and Reports

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the complainant's action for future reference and inform the commander of the results of the HRS... actions and results of the inquiry or investigation, and if discriminatory practices were found, written... the case file. (3) Inform the agent of the results of the inquiry by command correspondence if an...

  17. 32 CFR Appendix B to Part 192 - Procedures and Reports

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the complainant's action for future reference and inform the commander of the results of the HRS... actions and results of the inquiry or investigation, and if discriminatory practices were found, written... the case file. (3) Inform the agent of the results of the inquiry by command correspondence if an...

  18. 32 CFR Appendix B to Part 192 - Procedures and Reports

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the complainant's action for future reference and inform the commander of the results of the HRS... actions and results of the inquiry or investigation, and if discriminatory practices were found, written... the case file. (3) Inform the agent of the results of the inquiry by command correspondence if an...

  19. Design and implementation of a robot control system with traded and shared control capability

    NASA Technical Reports Server (NTRS)

    Hayati, S.; Venkataraman, S. T.

    1989-01-01

    Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from either a six-axis teleoperator device or an autonomous planner, or combine the two. Such a system should have both traded and shared control capability. A sharing strategy is presented whereby the overall system retains the positive features of teleoperated and autonomous operation while shedding their individual negative features. A two-tiered shared control architecture is considered here, consisting of a task level and a servo level. Also presented is a computer architecture for the implementation of this system, including a description of the hardware and software.
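    Shared control at the servo level can be sketched as a per-axis blend of teleoperated and autonomous velocity commands, with the extremes recovering traded control; the blending law below is an illustrative reading of the architecture, not the paper's exact formulation.

    import numpy as np

    def blended_command(v_teleop: np.ndarray,
                        v_auto: np.ndarray,
                        alpha: np.ndarray) -> np.ndarray:
        """alpha in [0, 1] per axis: 1 = pure teleoperation, 0 = pure autonomy."""
        alpha = np.clip(alpha, 0.0, 1.0)
        return alpha * v_teleop + (1.0 - alpha) * v_auto

    # e.g. the operator commands translation while the planner holds orientation:
    # v_cmd = blended_command(v_joy, v_plan, alpha=np.array([1, 1, 1, 0, 0, 0]))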

  20. SPHERES

    NASA Image and Video Library

    2013-08-08

    ISS036-E-029522 (7 Aug. 2013) --- In the International Space Station’s Kibo laboratory, NASA astronaut Karen Nyberg, Expedition 36 flight engineer, conducts a session with a pair of bowling-ball-sized free-flying satellites known as Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. Nyberg and NASA astronaut Chris Cassidy (not pictured) put the miniature satellites through their paces for a dry run of the SPHERES Zero Robotics tournament scheduled for Aug. 13. Teams of middle school students from Florida, Georgia, Idaho and Massachusetts will gather at the Massachusetts Institute of Technology in Cambridge to see which teams’ algorithms do the best job of commanding the free-flying robots through a series of maneuvers and objectives.
